00:00:00.000 Started by upstream project "autotest-per-patch" build number 131189 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.011 The recommended git tool is: git 00:00:00.011 using credential 00000000-0000-0000-0000-000000000002 00:00:00.013 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.028 Fetching changes from the remote Git repository 00:00:00.030 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.051 Using shallow fetch with depth 1 00:00:00.051 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.051 > git --version # timeout=10 00:00:00.087 > git --version # 'git version 2.39.2' 00:00:00.087 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.138 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.138 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.283 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.295 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.306 Checking out Revision 3f5fbcceba25866ebf7e22fd0e5d30548272f62c (FETCH_HEAD) 00:00:02.306 > git config core.sparsecheckout # timeout=10 00:00:02.319 > git read-tree -mu HEAD # timeout=10 00:00:02.336 > git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=5 00:00:02.352 Commit message: "packer: Bump java's version" 00:00:02.352 > git rev-list --no-walk 
3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=10 00:00:02.543 [Pipeline] Start of Pipeline 00:00:02.557 [Pipeline] library 00:00:02.559 Loading library shm_lib@master 00:00:02.559 Library shm_lib@master is cached. Copying from home. 00:00:02.577 [Pipeline] node 00:00:02.588 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:02.589 [Pipeline] { 00:00:02.601 [Pipeline] catchError 00:00:02.603 [Pipeline] { 00:00:02.616 [Pipeline] wrap 00:00:02.626 [Pipeline] { 00:00:02.634 [Pipeline] stage 00:00:02.637 [Pipeline] { (Prologue) 00:00:02.655 [Pipeline] echo 00:00:02.656 Node: VM-host-WFP7 00:00:02.663 [Pipeline] cleanWs 00:00:02.672 [WS-CLEANUP] Deleting project workspace... 00:00:02.672 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.678 [WS-CLEANUP] done 00:00:02.859 [Pipeline] setCustomBuildProperty 00:00:02.938 [Pipeline] httpRequest 00:00:03.321 [Pipeline] echo 00:00:03.322 Sorcerer 10.211.164.101 is alive 00:00:03.331 [Pipeline] retry 00:00:03.333 [Pipeline] { 00:00:03.347 [Pipeline] httpRequest 00:00:03.351 HttpMethod: GET 00:00:03.352 URL: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:03.352 Sending request to url: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:03.353 Response Code: HTTP/1.1 200 OK 00:00:03.353 Success: Status code 200 is in the accepted range: 200,404 00:00:03.354 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:03.500 [Pipeline] } 00:00:03.517 [Pipeline] // retry 00:00:03.524 [Pipeline] sh 00:00:03.809 + tar --no-same-owner -xf jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:03.824 [Pipeline] httpRequest 00:00:04.225 [Pipeline] echo 00:00:04.227 Sorcerer 10.211.164.101 is alive 00:00:04.237 [Pipeline] retry 00:00:04.239 [Pipeline] { 00:00:04.253 [Pipeline] httpRequest 00:00:04.257 HttpMethod: GET 00:00:04.257 URL: 
http://10.211.164.101/packages/spdk_0ea3371f35a221fe618426532d41f6f07af18781.tar.gz 00:00:04.257 Sending request to url: http://10.211.164.101/packages/spdk_0ea3371f35a221fe618426532d41f6f07af18781.tar.gz 00:00:04.258 Response Code: HTTP/1.1 200 OK 00:00:04.259 Success: Status code 200 is in the accepted range: 200,404 00:00:04.259 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_0ea3371f35a221fe618426532d41f6f07af18781.tar.gz 00:00:22.765 [Pipeline] } 00:00:22.785 [Pipeline] // retry 00:00:22.793 [Pipeline] sh 00:00:23.082 + tar --no-same-owner -xf spdk_0ea3371f35a221fe618426532d41f6f07af18781.tar.gz 00:00:25.629 [Pipeline] sh 00:00:25.913 + git -C spdk log --oneline -n5 00:00:25.913 0ea3371f3 thread: Extended options for spdk_interrupt_register 00:00:25.913 e85295127 util: fix total fds to wait for 00:00:25.913 6e2689c80 util: handle events for vfio fd type 00:00:25.913 e99566256 util: Extended options for spdk_fd_group_add 00:00:25.913 091b7aab9 test/unit: add missing fd_group unit tests 00:00:25.931 [Pipeline] writeFile 00:00:25.942 [Pipeline] sh 00:00:26.246 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:26.258 [Pipeline] sh 00:00:26.541 + cat autorun-spdk.conf 00:00:26.541 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.541 SPDK_RUN_ASAN=1 00:00:26.541 SPDK_RUN_UBSAN=1 00:00:26.541 SPDK_TEST_RAID=1 00:00:26.541 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.549 RUN_NIGHTLY=0 00:00:26.550 [Pipeline] } 00:00:26.562 [Pipeline] // stage 00:00:26.575 [Pipeline] stage 00:00:26.577 [Pipeline] { (Run VM) 00:00:26.589 [Pipeline] sh 00:00:26.871 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:26.871 + echo 'Start stage prepare_nvme.sh' 00:00:26.871 Start stage prepare_nvme.sh 00:00:26.871 + [[ -n 7 ]] 00:00:26.871 + disk_prefix=ex7 00:00:26.871 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:00:26.871 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:00:26.871 + source 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:00:26.871 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.871 ++ SPDK_RUN_ASAN=1 00:00:26.871 ++ SPDK_RUN_UBSAN=1 00:00:26.871 ++ SPDK_TEST_RAID=1 00:00:26.871 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.871 ++ RUN_NIGHTLY=0 00:00:26.871 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:00:26.871 + nvme_files=() 00:00:26.871 + declare -A nvme_files 00:00:26.871 + backend_dir=/var/lib/libvirt/images/backends 00:00:26.871 + nvme_files['nvme.img']=5G 00:00:26.871 + nvme_files['nvme-cmb.img']=5G 00:00:26.871 + nvme_files['nvme-multi0.img']=4G 00:00:26.871 + nvme_files['nvme-multi1.img']=4G 00:00:26.871 + nvme_files['nvme-multi2.img']=4G 00:00:26.871 + nvme_files['nvme-openstack.img']=8G 00:00:26.871 + nvme_files['nvme-zns.img']=5G 00:00:26.871 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:26.871 + (( SPDK_TEST_FTL == 1 )) 00:00:26.871 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:26.871 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:26.871 + for nvme in "${!nvme_files[@]}" 00:00:26.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:26.871 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.871 + for nvme in "${!nvme_files[@]}" 00:00:26.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:26.871 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.871 + for nvme in "${!nvme_files[@]}" 00:00:26.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:26.871 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:26.871 + for nvme in "${!nvme_files[@]}" 00:00:26.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:26.871 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.871 + for nvme in "${!nvme_files[@]}" 00:00:26.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:26.871 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.871 + for nvme in "${!nvme_files[@]}" 00:00:26.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:26.871 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.871 + for nvme in "${!nvme_files[@]}" 00:00:26.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:27.130 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:27.130 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:27.130 + echo 'End stage prepare_nvme.sh' 00:00:27.130 End stage prepare_nvme.sh 00:00:27.140 [Pipeline] sh 00:00:27.422 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:27.422 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:00:27.422 00:00:27.422 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:00:27.422 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:00:27.422 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:00:27.422 HELP=0 00:00:27.422 DRY_RUN=0 
00:00:27.422 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:27.422 NVME_DISKS_TYPE=nvme,nvme, 00:00:27.422 NVME_AUTO_CREATE=0 00:00:27.422 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:27.422 NVME_CMB=,, 00:00:27.422 NVME_PMR=,, 00:00:27.422 NVME_ZNS=,, 00:00:27.422 NVME_MS=,, 00:00:27.422 NVME_FDP=,, 00:00:27.422 SPDK_VAGRANT_DISTRO=fedora39 00:00:27.422 SPDK_VAGRANT_VMCPU=10 00:00:27.422 SPDK_VAGRANT_VMRAM=12288 00:00:27.422 SPDK_VAGRANT_PROVIDER=libvirt 00:00:27.422 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:27.422 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:27.422 SPDK_OPENSTACK_NETWORK=0 00:00:27.422 VAGRANT_PACKAGE_BOX=0 00:00:27.422 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:27.422 FORCE_DISTRO=true 00:00:27.422 VAGRANT_BOX_VERSION= 00:00:27.422 EXTRA_VAGRANTFILES= 00:00:27.422 NIC_MODEL=virtio 00:00:27.422 00:00:27.422 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:00:27.422 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:00:29.954 Bringing machine 'default' up with 'libvirt' provider... 00:00:30.213 ==> default: Creating image (snapshot of base box volume). 00:00:30.213 ==> default: Creating domain with the following settings... 
00:00:30.213 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728982787_cef2a638e4023e36e8f8 00:00:30.213 ==> default: -- Domain type: kvm 00:00:30.213 ==> default: -- Cpus: 10 00:00:30.213 ==> default: -- Feature: acpi 00:00:30.213 ==> default: -- Feature: apic 00:00:30.213 ==> default: -- Feature: pae 00:00:30.213 ==> default: -- Memory: 12288M 00:00:30.213 ==> default: -- Memory Backing: hugepages: 00:00:30.213 ==> default: -- Management MAC: 00:00:30.213 ==> default: -- Loader: 00:00:30.213 ==> default: -- Nvram: 00:00:30.213 ==> default: -- Base box: spdk/fedora39 00:00:30.213 ==> default: -- Storage pool: default 00:00:30.213 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728982787_cef2a638e4023e36e8f8.img (20G) 00:00:30.213 ==> default: -- Volume Cache: default 00:00:30.213 ==> default: -- Kernel: 00:00:30.213 ==> default: -- Initrd: 00:00:30.213 ==> default: -- Graphics Type: vnc 00:00:30.213 ==> default: -- Graphics Port: -1 00:00:30.213 ==> default: -- Graphics IP: 127.0.0.1 00:00:30.213 ==> default: -- Graphics Password: Not defined 00:00:30.213 ==> default: -- Video Type: cirrus 00:00:30.213 ==> default: -- Video VRAM: 9216 00:00:30.213 ==> default: -- Sound Type: 00:00:30.213 ==> default: -- Keymap: en-us 00:00:30.213 ==> default: -- TPM Path: 00:00:30.213 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:30.213 ==> default: -- Command line args: 00:00:30.213 ==> default: -> value=-device, 00:00:30.213 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:30.213 ==> default: -> value=-drive, 00:00:30.214 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:00:30.214 ==> default: -> value=-device, 00:00:30.214 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.214 ==> default: -> value=-device, 00:00:30.214 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:30.214 ==> default: -> value=-drive, 00:00:30.214 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:30.214 ==> default: -> value=-device, 00:00:30.214 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.214 ==> default: -> value=-drive, 00:00:30.214 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:30.214 ==> default: -> value=-device, 00:00:30.214 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.214 ==> default: -> value=-drive, 00:00:30.214 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:30.214 ==> default: -> value=-device, 00:00:30.214 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.471 ==> default: Creating shared folders metadata... 00:00:30.471 ==> default: Starting domain. 00:00:31.408 ==> default: Waiting for domain to get an IP address... 00:00:49.486 ==> default: Waiting for SSH to become available... 00:00:49.486 ==> default: Configuring and enabling network interfaces... 00:00:54.758 default: SSH address: 192.168.121.246:22 00:00:54.758 default: SSH username: vagrant 00:00:54.758 default: SSH auth method: private key 00:00:57.298 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:05.443 ==> default: Mounting SSHFS shared folder... 00:01:07.348 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:07.348 ==> default: Checking Mount.. 
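The command-line args dumped above attach one emulated NVMe controller per `-device nvme,...` entry, with additional namespaces bolted on via `-drive`/`-device nvme-ns` pairs. A minimal sketch of how such an argument list can be assembled in bash — the `ex7-` image names and device properties follow the log, but the helper function itself is illustrative, not SPDK's actual `vagrant_create_vm.sh`:

```shell
#!/usr/bin/env bash
# Sketch: build QEMU args for NVMe controllers with one or more namespaces,
# mirroring the -device/-drive structure seen in the log above.

backend_dir=/var/lib/libvirt/images/backends   # path from the log
qemu_args=()

add_nvme_ctrl() {            # $1=index $2=serial $3=PCI addr $4...=backing images
  local idx=$1 serial=$2 addr=$3; shift 3
  qemu_args+=(-device "nvme,id=nvme-${idx},serial=${serial},addr=${addr}")
  local nsid=1 img
  for img in "$@"; do
    qemu_args+=(-drive "format=raw,file=${img},if=none,id=nvme-${idx}-drive$((nsid - 1))")
    qemu_args+=(-device "nvme-ns,drive=nvme-${idx}-drive$((nsid - 1)),bus=nvme-${idx},nsid=${nsid},logical_block_size=4096,physical_block_size=4096")
    nsid=$((nsid + 1))
  done
}

# Controller 0: single namespace; controller 1: three namespaces (nsid 1-3),
# matching the multi0/multi1/multi2 images in the log.
add_nvme_ctrl 0 12340 0x10 "${backend_dir}/ex7-nvme.img"
add_nvme_ctrl 1 12341 0x11 "${backend_dir}/ex7-nvme-multi0.img" \
                           "${backend_dir}/ex7-nvme-multi1.img" \
                           "${backend_dir}/ex7-nvme-multi2.img"

printf '%s\n' "${qemu_args[@]}"
```

Keeping each namespace's `-drive` adjacent to its `nvme-ns` device, as the log does, makes the `drive=`/`bus=` wiring easy to audit in the dumped args.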
00:01:08.729 ==> default: Folder Successfully Mounted! 00:01:08.729 ==> default: Running provisioner: file... 00:01:10.109 default: ~/.gitconfig => .gitconfig 00:01:10.367 00:01:10.367 SUCCESS! 00:01:10.367 00:01:10.367 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:10.367 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:10.367 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:10.367 00:01:10.375 [Pipeline] } 00:01:10.391 [Pipeline] // stage 00:01:10.399 [Pipeline] dir 00:01:10.400 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:01:10.402 [Pipeline] { 00:01:10.415 [Pipeline] catchError 00:01:10.417 [Pipeline] { 00:01:10.429 [Pipeline] sh 00:01:10.711 + vagrant ssh-config --host vagrant 00:01:10.711 + sed -ne /^Host/,$p 00:01:10.711 + tee ssh_conf 00:01:13.268 Host vagrant 00:01:13.268 HostName 192.168.121.246 00:01:13.268 User vagrant 00:01:13.268 Port 22 00:01:13.268 UserKnownHostsFile /dev/null 00:01:13.268 StrictHostKeyChecking no 00:01:13.268 PasswordAuthentication no 00:01:13.268 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:13.268 IdentitiesOnly yes 00:01:13.268 LogLevel FATAL 00:01:13.268 ForwardAgent yes 00:01:13.268 ForwardX11 yes 00:01:13.268 00:01:13.280 [Pipeline] withEnv 00:01:13.282 [Pipeline] { 00:01:13.295 [Pipeline] sh 00:01:13.577 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:13.577 source /etc/os-release 00:01:13.577 [[ -e /image.version ]] && img=$(< /image.version) 00:01:13.577 # Minimal, systemd-like check. 
00:01:13.577 if [[ -e /.dockerenv ]]; then 00:01:13.577 # Clear garbage from the node's name: 00:01:13.577 # agt-er_autotest_547-896 -> autotest_547-896 00:01:13.577 # $HOSTNAME is the actual container id 00:01:13.577 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:13.577 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:13.577 # We can assume this is a mount from a host where container is running, 00:01:13.577 # so fetch its hostname to easily identify the target swarm worker. 00:01:13.577 container="$(< /etc/hostname) ($agent)" 00:01:13.577 else 00:01:13.577 # Fallback 00:01:13.577 container=$agent 00:01:13.577 fi 00:01:13.577 fi 00:01:13.577 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:13.577 00:01:13.848 [Pipeline] } 00:01:13.865 [Pipeline] // withEnv 00:01:13.872 [Pipeline] setCustomBuildProperty 00:01:13.885 [Pipeline] stage 00:01:13.887 [Pipeline] { (Tests) 00:01:13.900 [Pipeline] sh 00:01:14.178 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:14.451 [Pipeline] sh 00:01:14.739 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:15.014 [Pipeline] timeout 00:01:15.014 Timeout set to expire in 1 hr 30 min 00:01:15.016 [Pipeline] { 00:01:15.031 [Pipeline] sh 00:01:15.320 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:15.890 HEAD is now at 0ea3371f3 thread: Extended options for spdk_interrupt_register 00:01:15.902 [Pipeline] sh 00:01:16.188 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:16.462 [Pipeline] sh 00:01:16.746 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:17.022 [Pipeline] sh 00:01:17.309 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:17.569 ++ readlink -f spdk_repo 00:01:17.569 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:17.569 + [[ -n /home/vagrant/spdk_repo ]] 00:01:17.569 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:17.569 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:17.569 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:17.569 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:17.569 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:17.569 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:17.569 + cd /home/vagrant/spdk_repo 00:01:17.569 + source /etc/os-release 00:01:17.569 ++ NAME='Fedora Linux' 00:01:17.569 ++ VERSION='39 (Cloud Edition)' 00:01:17.569 ++ ID=fedora 00:01:17.569 ++ VERSION_ID=39 00:01:17.569 ++ VERSION_CODENAME= 00:01:17.569 ++ PLATFORM_ID=platform:f39 00:01:17.569 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:17.569 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.569 ++ LOGO=fedora-logo-icon 00:01:17.569 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:17.569 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.569 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:17.569 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.569 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.569 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.569 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:17.569 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.569 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:17.569 ++ SUPPORT_END=2024-11-12 00:01:17.569 ++ VARIANT='Cloud Edition' 00:01:17.569 ++ VARIANT_ID=cloud 00:01:17.569 + uname -a 00:01:17.569 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:17.569 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:18.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:18.137 Hugepages 00:01:18.137 
node hugesize free / total 00:01:18.137 node0 1048576kB 0 / 0 00:01:18.137 node0 2048kB 0 / 0 00:01:18.137 00:01:18.137 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.137 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:18.137 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:18.138 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:18.138 + rm -f /tmp/spdk-ld-path 00:01:18.138 + source autorun-spdk.conf 00:01:18.138 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.138 ++ SPDK_RUN_ASAN=1 00:01:18.138 ++ SPDK_RUN_UBSAN=1 00:01:18.138 ++ SPDK_TEST_RAID=1 00:01:18.138 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.138 ++ RUN_NIGHTLY=0 00:01:18.138 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.138 + [[ -n '' ]] 00:01:18.138 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:18.138 + for M in /var/spdk/build-*-manifest.txt 00:01:18.138 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:18.138 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.138 + for M in /var/spdk/build-*-manifest.txt 00:01:18.138 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.138 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.138 + for M in /var/spdk/build-*-manifest.txt 00:01:18.138 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.138 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.138 ++ uname 00:01:18.138 + [[ Linux == \L\i\n\u\x ]] 00:01:18.138 + sudo dmesg -T 00:01:18.459 + sudo dmesg --clear 00:01:18.459 + dmesg_pid=5422 00:01:18.459 + sudo dmesg -Tw 00:01:18.459 + [[ Fedora Linux == FreeBSD ]] 00:01:18.459 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.459 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.459 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.459 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.459 + export FIO_BIN=/usr/src/fio-static/fio 
00:01:18.459 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.459 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.459 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.459 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.459 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.459 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.459 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.459 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.459 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.459 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:18.459 Test configuration: 00:01:18.459 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.459 SPDK_RUN_ASAN=1 00:01:18.459 SPDK_RUN_UBSAN=1 00:01:18.459 SPDK_TEST_RAID=1 00:01:18.459 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.459 RUN_NIGHTLY=0 09:00:36 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:18.460 09:00:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:18.460 09:00:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:18.460 09:00:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.460 09:00:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.460 09:00:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.460 09:00:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.460 09:00:36 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.460 09:00:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.460 09:00:36 -- paths/export.sh@5 -- $ export PATH 00:01:18.460 09:00:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.460 09:00:36 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:18.460 09:00:36 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:18.460 09:00:36 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728982836.XXXXXX 00:01:18.460 09:00:36 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728982836.eA1fyg 00:01:18.460 09:00:36 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:18.460 09:00:36 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:18.460 09:00:36 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:18.460 09:00:36 -- common/autobuild_common.sh@499 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:18.460 09:00:36 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.460 09:00:36 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:18.460 09:00:36 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:18.460 09:00:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.460 09:00:36 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:18.460 09:00:36 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:18.460 09:00:36 -- pm/common@17 -- $ local monitor 00:01:18.460 09:00:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.460 09:00:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:18.460 09:00:36 -- pm/common@21 -- $ date +%s 00:01:18.460 09:00:36 -- pm/common@25 -- $ sleep 1 00:01:18.460 09:00:36 -- pm/common@21 -- $ date +%s 00:01:18.460 09:00:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728982836 00:01:18.460 09:00:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728982836 00:01:18.460 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728982836_collect-cpu-load.pm.log 00:01:18.460 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728982836_collect-vmstat.pm.log 00:01:19.399 09:00:37 -- common/autobuild_common.sh@505 -- 
$ trap stop_monitor_resources EXIT 00:01:19.399 09:00:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:19.399 09:00:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:19.399 09:00:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:19.399 09:00:37 -- spdk/autobuild.sh@16 -- $ date -u 00:01:19.399 Tue Oct 15 09:00:37 AM UTC 2024 00:01:19.399 09:00:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.399 v25.01-pre-75-g0ea3371f3 00:01:19.399 09:00:37 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:19.399 09:00:37 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:19.399 09:00:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:19.399 09:00:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:19.399 09:00:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.659 ************************************ 00:01:19.659 START TEST asan 00:01:19.659 ************************************ 00:01:19.659 using asan 00:01:19.659 09:00:37 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:19.659 00:01:19.659 real 0m0.001s 00:01:19.659 user 0m0.000s 00:01:19.659 sys 0m0.000s 00:01:19.659 09:00:37 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:19.659 09:00:37 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:19.659 ************************************ 00:01:19.659 END TEST asan 00:01:19.659 ************************************ 00:01:19.659 09:00:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:19.659 09:00:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:19.659 09:00:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:19.659 09:00:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:19.659 09:00:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.659 ************************************ 00:01:19.659 START TEST ubsan 00:01:19.659 ************************************ 00:01:19.659 using ubsan 00:01:19.659 09:00:37 ubsan -- 
common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:19.659 00:01:19.659 real 0m0.000s 00:01:19.659 user 0m0.000s 00:01:19.659 sys 0m0.000s 00:01:19.659 09:00:37 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:19.659 09:00:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:19.659 ************************************ 00:01:19.659 END TEST ubsan 00:01:19.659 ************************************ 00:01:19.659 09:00:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:19.659 09:00:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:19.659 09:00:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:19.659 09:00:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:19.659 09:00:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:19.659 09:00:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:19.659 09:00:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:19.659 09:00:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:19.659 09:00:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:19.918 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:19.918 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:20.176 Using 'verbs' RDMA provider 00:01:36.436 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:54.562 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:54.563 Creating mk/config.mk...done. 00:01:54.563 Creating mk/cc.flags.mk...done. 00:01:54.563 Type 'make' to build. 
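Upstream of the `configure` call above, `get_config_params` turns the sourced `autorun-spdk.conf` settings into flags: the job's `SPDK_RUN_ASAN=1`/`SPDK_RUN_UBSAN=1`/`SPDK_TEST_RAID=1` surface as `--enable-asan`, `--enable-ubsan`, and `--with-raid5f` in `config_params`. A sketch of that mapping, inferred from the conf file and resulting flags in this log rather than taken from SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Sketch: derive configure flags from an autorun-spdk.conf, mirroring how this
# log's SPDK_RUN_ASAN/UBSAN/SPDK_TEST_RAID settings appear in config_params.
# The mapping is inferred from the log, not SPDK's actual get_config_params.

cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_RUN_ASAN=1
SPDK_RUN_UBSAN=1
SPDK_TEST_RAID=1
EOF

source ./autorun-spdk.conf

config_params='--enable-debug --enable-werror'
[ "${SPDK_RUN_ASAN:-0}" -eq 1 ]  && config_params+=' --enable-asan'
[ "${SPDK_RUN_UBSAN:-0}" -eq 1 ] && config_params+=' --enable-ubsan'
[ "${SPDK_TEST_RAID:-0}" -eq 1 ] && config_params+=' --with-raid5f'

echo "$config_params"
```

Because the conf file is plain `KEY=value` shell, `source` is all it takes to load it — which is exactly what the `++ SPDK_RUN_ASAN=1` xtrace lines earlier in the log show happening.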
00:01:54.563 09:01:10 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:54.563 09:01:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:54.563 09:01:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:54.563 09:01:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.563 ************************************ 00:01:54.563 START TEST make 00:01:54.563 ************************************ 00:01:54.563 09:01:10 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:54.563 make[1]: Nothing to be done for 'all'. 00:02:02.681 The Meson build system 00:02:02.682 Version: 1.5.0 00:02:02.682 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:02.682 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:02.682 Build type: native build 00:02:02.682 Program cat found: YES (/usr/bin/cat) 00:02:02.682 Project name: DPDK 00:02:02.682 Project version: 24.03.0 00:02:02.682 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:02.682 C linker for the host machine: cc ld.bfd 2.40-14 00:02:02.682 Host machine cpu family: x86_64 00:02:02.682 Host machine cpu: x86_64 00:02:02.682 Message: ## Building in Developer Mode ## 00:02:02.682 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:02.682 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:02.682 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:02.682 Program python3 found: YES (/usr/bin/python3) 00:02:02.682 Program cat found: YES (/usr/bin/cat) 00:02:02.682 Compiler for C supports arguments -march=native: YES 00:02:02.682 Checking for size of "void *" : 8 00:02:02.682 Checking for size of "void *" : 8 (cached) 00:02:02.682 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:02.682 Library m found: YES 00:02:02.682 Library numa found: YES 00:02:02.682 Has header "numaif.h" : YES 
00:02:02.682 Library fdt found: NO 00:02:02.682 Library execinfo found: NO 00:02:02.682 Has header "execinfo.h" : YES 00:02:02.682 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:02.682 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:02.682 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:02.682 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:02.682 Run-time dependency openssl found: YES 3.1.1 00:02:02.682 Run-time dependency libpcap found: YES 1.10.4 00:02:02.682 Has header "pcap.h" with dependency libpcap: YES 00:02:02.682 Compiler for C supports arguments -Wcast-qual: YES 00:02:02.682 Compiler for C supports arguments -Wdeprecated: YES 00:02:02.682 Compiler for C supports arguments -Wformat: YES 00:02:02.682 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:02.682 Compiler for C supports arguments -Wformat-security: NO 00:02:02.682 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:02.682 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:02.682 Compiler for C supports arguments -Wnested-externs: YES 00:02:02.682 Compiler for C supports arguments -Wold-style-definition: YES 00:02:02.682 Compiler for C supports arguments -Wpointer-arith: YES 00:02:02.682 Compiler for C supports arguments -Wsign-compare: YES 00:02:02.682 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:02.682 Compiler for C supports arguments -Wundef: YES 00:02:02.682 Compiler for C supports arguments -Wwrite-strings: YES 00:02:02.682 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:02.682 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:02.682 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:02.682 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:02.682 Program objdump found: YES (/usr/bin/objdump) 00:02:02.682 Compiler for C supports arguments -mavx512f: YES 00:02:02.682 Checking if "AVX512 
checking" compiles: YES 00:02:02.682 Fetching value of define "__SSE4_2__" : 1 00:02:02.682 Fetching value of define "__AES__" : 1 00:02:02.682 Fetching value of define "__AVX__" : 1 00:02:02.682 Fetching value of define "__AVX2__" : 1 00:02:02.682 Fetching value of define "__AVX512BW__" : 1 00:02:02.682 Fetching value of define "__AVX512CD__" : 1 00:02:02.682 Fetching value of define "__AVX512DQ__" : 1 00:02:02.682 Fetching value of define "__AVX512F__" : 1 00:02:02.682 Fetching value of define "__AVX512VL__" : 1 00:02:02.682 Fetching value of define "__PCLMUL__" : 1 00:02:02.682 Fetching value of define "__RDRND__" : 1 00:02:02.682 Fetching value of define "__RDSEED__" : 1 00:02:02.682 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.682 Fetching value of define "__znver1__" : (undefined) 00:02:02.682 Fetching value of define "__znver2__" : (undefined) 00:02:02.682 Fetching value of define "__znver3__" : (undefined) 00:02:02.682 Fetching value of define "__znver4__" : (undefined) 00:02:02.682 Library asan found: YES 00:02:02.682 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.682 Message: lib/log: Defining dependency "log" 00:02:02.682 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.682 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.682 Library rt found: YES 00:02:02.682 Checking for function "getentropy" : NO 00:02:02.682 Message: lib/eal: Defining dependency "eal" 00:02:02.682 Message: lib/ring: Defining dependency "ring" 00:02:02.682 Message: lib/rcu: Defining dependency "rcu" 00:02:02.682 Message: lib/mempool: Defining dependency "mempool" 00:02:02.682 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.682 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.682 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.682 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:02.682 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:02.682 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:02:02.682 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:02.682 Compiler for C supports arguments -mpclmul: YES 00:02:02.682 Compiler for C supports arguments -maes: YES 00:02:02.682 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.682 Compiler for C supports arguments -mavx512bw: YES 00:02:02.682 Compiler for C supports arguments -mavx512dq: YES 00:02:02.682 Compiler for C supports arguments -mavx512vl: YES 00:02:02.682 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:02.682 Compiler for C supports arguments -mavx2: YES 00:02:02.682 Compiler for C supports arguments -mavx: YES 00:02:02.682 Message: lib/net: Defining dependency "net" 00:02:02.682 Message: lib/meter: Defining dependency "meter" 00:02:02.682 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.682 Message: lib/pci: Defining dependency "pci" 00:02:02.682 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.682 Message: lib/hash: Defining dependency "hash" 00:02:02.682 Message: lib/timer: Defining dependency "timer" 00:02:02.682 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.682 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.682 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.682 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.682 Message: lib/power: Defining dependency "power" 00:02:02.682 Message: lib/reorder: Defining dependency "reorder" 00:02:02.682 Message: lib/security: Defining dependency "security" 00:02:02.682 Has header "linux/userfaultfd.h" : YES 00:02:02.682 Has header "linux/vduse.h" : YES 00:02:02.682 Message: lib/vhost: Defining dependency "vhost" 00:02:02.682 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:02.682 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.682 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.682 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:02:02.682 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.682 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.682 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.682 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.682 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.682 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:02.682 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:02.682 Configuring doxy-api-html.conf using configuration 00:02:02.682 Configuring doxy-api-man.conf using configuration 00:02:02.682 Program mandb found: YES (/usr/bin/mandb) 00:02:02.682 Program sphinx-build found: NO 00:02:02.682 Configuring rte_build_config.h using configuration 00:02:02.682 Message: 00:02:02.682 ================= 00:02:02.682 Applications Enabled 00:02:02.682 ================= 00:02:02.682 00:02:02.682 apps: 00:02:02.682 00:02:02.682 00:02:02.682 Message: 00:02:02.682 ================= 00:02:02.682 Libraries Enabled 00:02:02.682 ================= 00:02:02.682 00:02:02.682 libs: 00:02:02.682 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.682 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.682 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.682 00:02:02.683 Message: 00:02:02.683 =============== 00:02:02.683 Drivers Enabled 00:02:02.683 =============== 00:02:02.683 00:02:02.683 common: 00:02:02.683 00:02:02.683 bus: 00:02:02.683 pci, vdev, 00:02:02.683 mempool: 00:02:02.683 ring, 00:02:02.683 dma: 00:02:02.683 00:02:02.683 net: 00:02:02.683 00:02:02.683 crypto: 00:02:02.683 00:02:02.683 compress: 00:02:02.683 00:02:02.683 vdpa: 00:02:02.683 00:02:02.683 00:02:02.683 Message: 00:02:02.683 ================= 00:02:02.683 Content Skipped 00:02:02.683 ================= 00:02:02.683 00:02:02.683 apps: 
00:02:02.683 dumpcap: explicitly disabled via build config 00:02:02.683 graph: explicitly disabled via build config 00:02:02.683 pdump: explicitly disabled via build config 00:02:02.683 proc-info: explicitly disabled via build config 00:02:02.683 test-acl: explicitly disabled via build config 00:02:02.683 test-bbdev: explicitly disabled via build config 00:02:02.683 test-cmdline: explicitly disabled via build config 00:02:02.683 test-compress-perf: explicitly disabled via build config 00:02:02.683 test-crypto-perf: explicitly disabled via build config 00:02:02.683 test-dma-perf: explicitly disabled via build config 00:02:02.683 test-eventdev: explicitly disabled via build config 00:02:02.683 test-fib: explicitly disabled via build config 00:02:02.683 test-flow-perf: explicitly disabled via build config 00:02:02.683 test-gpudev: explicitly disabled via build config 00:02:02.683 test-mldev: explicitly disabled via build config 00:02:02.683 test-pipeline: explicitly disabled via build config 00:02:02.683 test-pmd: explicitly disabled via build config 00:02:02.683 test-regex: explicitly disabled via build config 00:02:02.683 test-sad: explicitly disabled via build config 00:02:02.683 test-security-perf: explicitly disabled via build config 00:02:02.683 00:02:02.683 libs: 00:02:02.683 argparse: explicitly disabled via build config 00:02:02.683 metrics: explicitly disabled via build config 00:02:02.683 acl: explicitly disabled via build config 00:02:02.683 bbdev: explicitly disabled via build config 00:02:02.683 bitratestats: explicitly disabled via build config 00:02:02.683 bpf: explicitly disabled via build config 00:02:02.683 cfgfile: explicitly disabled via build config 00:02:02.683 distributor: explicitly disabled via build config 00:02:02.683 efd: explicitly disabled via build config 00:02:02.683 eventdev: explicitly disabled via build config 00:02:02.683 dispatcher: explicitly disabled via build config 00:02:02.683 gpudev: explicitly disabled via build config 
00:02:02.683 gro: explicitly disabled via build config 00:02:02.683 gso: explicitly disabled via build config 00:02:02.683 ip_frag: explicitly disabled via build config 00:02:02.683 jobstats: explicitly disabled via build config 00:02:02.683 latencystats: explicitly disabled via build config 00:02:02.683 lpm: explicitly disabled via build config 00:02:02.683 member: explicitly disabled via build config 00:02:02.683 pcapng: explicitly disabled via build config 00:02:02.683 rawdev: explicitly disabled via build config 00:02:02.683 regexdev: explicitly disabled via build config 00:02:02.683 mldev: explicitly disabled via build config 00:02:02.683 rib: explicitly disabled via build config 00:02:02.683 sched: explicitly disabled via build config 00:02:02.683 stack: explicitly disabled via build config 00:02:02.683 ipsec: explicitly disabled via build config 00:02:02.683 pdcp: explicitly disabled via build config 00:02:02.683 fib: explicitly disabled via build config 00:02:02.683 port: explicitly disabled via build config 00:02:02.683 pdump: explicitly disabled via build config 00:02:02.683 table: explicitly disabled via build config 00:02:02.683 pipeline: explicitly disabled via build config 00:02:02.683 graph: explicitly disabled via build config 00:02:02.683 node: explicitly disabled via build config 00:02:02.683 00:02:02.683 drivers: 00:02:02.683 common/cpt: not in enabled drivers build config 00:02:02.683 common/dpaax: not in enabled drivers build config 00:02:02.683 common/iavf: not in enabled drivers build config 00:02:02.683 common/idpf: not in enabled drivers build config 00:02:02.683 common/ionic: not in enabled drivers build config 00:02:02.683 common/mvep: not in enabled drivers build config 00:02:02.683 common/octeontx: not in enabled drivers build config 00:02:02.683 bus/auxiliary: not in enabled drivers build config 00:02:02.683 bus/cdx: not in enabled drivers build config 00:02:02.683 bus/dpaa: not in enabled drivers build config 00:02:02.683 bus/fslmc: 
not in enabled drivers build config 00:02:02.683 bus/ifpga: not in enabled drivers build config 00:02:02.683 bus/platform: not in enabled drivers build config 00:02:02.683 bus/uacce: not in enabled drivers build config 00:02:02.683 bus/vmbus: not in enabled drivers build config 00:02:02.683 common/cnxk: not in enabled drivers build config 00:02:02.683 common/mlx5: not in enabled drivers build config 00:02:02.683 common/nfp: not in enabled drivers build config 00:02:02.683 common/nitrox: not in enabled drivers build config 00:02:02.683 common/qat: not in enabled drivers build config 00:02:02.683 common/sfc_efx: not in enabled drivers build config 00:02:02.683 mempool/bucket: not in enabled drivers build config 00:02:02.683 mempool/cnxk: not in enabled drivers build config 00:02:02.683 mempool/dpaa: not in enabled drivers build config 00:02:02.683 mempool/dpaa2: not in enabled drivers build config 00:02:02.683 mempool/octeontx: not in enabled drivers build config 00:02:02.683 mempool/stack: not in enabled drivers build config 00:02:02.683 dma/cnxk: not in enabled drivers build config 00:02:02.683 dma/dpaa: not in enabled drivers build config 00:02:02.683 dma/dpaa2: not in enabled drivers build config 00:02:02.683 dma/hisilicon: not in enabled drivers build config 00:02:02.683 dma/idxd: not in enabled drivers build config 00:02:02.683 dma/ioat: not in enabled drivers build config 00:02:02.683 dma/skeleton: not in enabled drivers build config 00:02:02.683 net/af_packet: not in enabled drivers build config 00:02:02.683 net/af_xdp: not in enabled drivers build config 00:02:02.683 net/ark: not in enabled drivers build config 00:02:02.683 net/atlantic: not in enabled drivers build config 00:02:02.683 net/avp: not in enabled drivers build config 00:02:02.683 net/axgbe: not in enabled drivers build config 00:02:02.683 net/bnx2x: not in enabled drivers build config 00:02:02.683 net/bnxt: not in enabled drivers build config 00:02:02.683 net/bonding: not in enabled drivers 
build config 00:02:02.683 net/cnxk: not in enabled drivers build config 00:02:02.683 net/cpfl: not in enabled drivers build config 00:02:02.683 net/cxgbe: not in enabled drivers build config 00:02:02.683 net/dpaa: not in enabled drivers build config 00:02:02.683 net/dpaa2: not in enabled drivers build config 00:02:02.683 net/e1000: not in enabled drivers build config 00:02:02.683 net/ena: not in enabled drivers build config 00:02:02.683 net/enetc: not in enabled drivers build config 00:02:02.683 net/enetfec: not in enabled drivers build config 00:02:02.683 net/enic: not in enabled drivers build config 00:02:02.683 net/failsafe: not in enabled drivers build config 00:02:02.683 net/fm10k: not in enabled drivers build config 00:02:02.683 net/gve: not in enabled drivers build config 00:02:02.683 net/hinic: not in enabled drivers build config 00:02:02.683 net/hns3: not in enabled drivers build config 00:02:02.683 net/i40e: not in enabled drivers build config 00:02:02.683 net/iavf: not in enabled drivers build config 00:02:02.683 net/ice: not in enabled drivers build config 00:02:02.683 net/idpf: not in enabled drivers build config 00:02:02.683 net/igc: not in enabled drivers build config 00:02:02.683 net/ionic: not in enabled drivers build config 00:02:02.683 net/ipn3ke: not in enabled drivers build config 00:02:02.683 net/ixgbe: not in enabled drivers build config 00:02:02.683 net/mana: not in enabled drivers build config 00:02:02.683 net/memif: not in enabled drivers build config 00:02:02.683 net/mlx4: not in enabled drivers build config 00:02:02.683 net/mlx5: not in enabled drivers build config 00:02:02.683 net/mvneta: not in enabled drivers build config 00:02:02.683 net/mvpp2: not in enabled drivers build config 00:02:02.683 net/netvsc: not in enabled drivers build config 00:02:02.683 net/nfb: not in enabled drivers build config 00:02:02.683 net/nfp: not in enabled drivers build config 00:02:02.683 net/ngbe: not in enabled drivers build config 00:02:02.683 net/null: 
not in enabled drivers build config 00:02:02.683 net/octeontx: not in enabled drivers build config 00:02:02.683 net/octeon_ep: not in enabled drivers build config 00:02:02.683 net/pcap: not in enabled drivers build config 00:02:02.683 net/pfe: not in enabled drivers build config 00:02:02.683 net/qede: not in enabled drivers build config 00:02:02.683 net/ring: not in enabled drivers build config 00:02:02.683 net/sfc: not in enabled drivers build config 00:02:02.683 net/softnic: not in enabled drivers build config 00:02:02.683 net/tap: not in enabled drivers build config 00:02:02.683 net/thunderx: not in enabled drivers build config 00:02:02.683 net/txgbe: not in enabled drivers build config 00:02:02.683 net/vdev_netvsc: not in enabled drivers build config 00:02:02.683 net/vhost: not in enabled drivers build config 00:02:02.683 net/virtio: not in enabled drivers build config 00:02:02.683 net/vmxnet3: not in enabled drivers build config 00:02:02.683 raw/*: missing internal dependency, "rawdev" 00:02:02.683 crypto/armv8: not in enabled drivers build config 00:02:02.683 crypto/bcmfs: not in enabled drivers build config 00:02:02.683 crypto/caam_jr: not in enabled drivers build config 00:02:02.683 crypto/ccp: not in enabled drivers build config 00:02:02.683 crypto/cnxk: not in enabled drivers build config 00:02:02.683 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.683 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.683 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.683 crypto/mlx5: not in enabled drivers build config 00:02:02.683 crypto/mvsam: not in enabled drivers build config 00:02:02.683 crypto/nitrox: not in enabled drivers build config 00:02:02.683 crypto/null: not in enabled drivers build config 00:02:02.683 crypto/octeontx: not in enabled drivers build config 00:02:02.683 crypto/openssl: not in enabled drivers build config 00:02:02.683 crypto/scheduler: not in enabled drivers build config 00:02:02.683 crypto/uadk: not 
in enabled drivers build config 00:02:02.683 crypto/virtio: not in enabled drivers build config 00:02:02.683 compress/isal: not in enabled drivers build config 00:02:02.684 compress/mlx5: not in enabled drivers build config 00:02:02.684 compress/nitrox: not in enabled drivers build config 00:02:02.684 compress/octeontx: not in enabled drivers build config 00:02:02.684 compress/zlib: not in enabled drivers build config 00:02:02.684 regex/*: missing internal dependency, "regexdev" 00:02:02.684 ml/*: missing internal dependency, "mldev" 00:02:02.684 vdpa/ifc: not in enabled drivers build config 00:02:02.684 vdpa/mlx5: not in enabled drivers build config 00:02:02.684 vdpa/nfp: not in enabled drivers build config 00:02:02.684 vdpa/sfc: not in enabled drivers build config 00:02:02.684 event/*: missing internal dependency, "eventdev" 00:02:02.684 baseband/*: missing internal dependency, "bbdev" 00:02:02.684 gpu/*: missing internal dependency, "gpudev" 00:02:02.684 00:02:02.684 00:02:02.684 Build targets in project: 85 00:02:02.684 00:02:02.684 DPDK 24.03.0 00:02:02.684 00:02:02.684 User defined options 00:02:02.684 buildtype : debug 00:02:02.684 default_library : shared 00:02:02.684 libdir : lib 00:02:02.684 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:02.684 b_sanitize : address 00:02:02.684 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:02.684 c_link_args : 00:02:02.684 cpu_instruction_set: native 00:02:02.684 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:02.684 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:02.684 enable_docs : false 00:02:02.684 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:02.684 enable_kmods : false 00:02:02.684 max_lcores : 128 00:02:02.684 tests : false 00:02:02.684 00:02:02.684 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.943 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:02.943 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:02.943 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:02.943 [3/268] Linking static target lib/librte_log.a 00:02:02.943 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:02.943 [5/268] Linking static target lib/librte_kvargs.a 00:02:02.943 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.202 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.461 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.461 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.461 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.461 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.461 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.461 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.461 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.461 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.461 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:02:03.461 [17/268] Linking static target lib/librte_telemetry.a 00:02:03.720 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.720 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.720 [20/268] Linking target lib/librte_log.so.24.1 00:02:03.980 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.980 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.980 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.980 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.980 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.980 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.980 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:03.980 [28/268] Linking target lib/librte_kvargs.so.24.1 00:02:04.310 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:04.310 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.310 [31/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.310 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.310 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:04.310 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:04.310 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.569 [36/268] Linking target lib/librte_telemetry.so.24.1 00:02:04.569 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.569 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:04.569 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:04.569 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:04.569 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.569 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.829 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:04.829 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.829 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.829 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:04.829 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.088 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.088 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:05.088 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:05.088 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.347 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.347 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.347 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.347 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.347 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.347 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.605 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.605 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.606 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 
00:02:05.606 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.606 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.606 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.864 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.864 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.864 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.864 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:06.123 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.123 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:06.123 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:06.123 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:06.123 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.382 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.382 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.382 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.382 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.382 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.382 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:06.640 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.640 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.640 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.640 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.640 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.899 [84/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.899 [85/268] Linking static target lib/librte_eal.a 00:02:06.899 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.899 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.899 [88/268] Linking static target lib/librte_ring.a 00:02:07.157 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.157 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.157 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.157 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.157 [93/268] Linking static target lib/librte_mempool.a 00:02:07.157 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.416 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.416 [96/268] Linking static target lib/librte_rcu.a 00:02:07.416 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.416 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.675 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.675 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.675 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.675 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.675 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.675 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.675 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.937 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.937 [107/268] Linking static target lib/librte_net.a 00:02:07.937 [108/268] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.937 [109/268] Linking static target lib/librte_mbuf.a 00:02:07.937 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.937 [111/268] Linking static target lib/librte_meter.a 00:02:08.196 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.196 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.196 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.196 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.455 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.455 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.455 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.714 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.714 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.973 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.973 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.973 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.233 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.233 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.233 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.233 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.233 [128/268] Linking static target lib/librte_pci.a 00:02:09.233 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.233 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.493 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.493 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.493 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.493 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.493 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.493 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.493 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.493 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.493 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.753 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.753 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.753 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.753 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.753 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:09.753 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.753 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.753 [147/268] Linking static target lib/librte_cmdline.a 00:02:10.012 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.012 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:10.012 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.012 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.012 [152/268] Linking static target 
lib/librte_timer.a 00:02:10.012 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.271 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.271 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.530 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.530 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.530 [158/268] Linking static target lib/librte_ethdev.a 00:02:10.530 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.530 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.789 [161/268] Linking static target lib/librte_compressdev.a 00:02:10.789 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:10.789 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.789 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.048 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.048 [166/268] Linking static target lib/librte_dmadev.a 00:02:11.048 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.048 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.048 [169/268] Linking static target lib/librte_hash.a 00:02:11.048 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:11.048 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:11.308 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.308 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:11.308 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.567 
[175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.567 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:11.827 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:11.827 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:11.827 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.827 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:11.827 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.827 [182/268] Linking static target lib/librte_cryptodev.a 00:02:11.827 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.086 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.086 [185/268] Linking static target lib/librte_power.a 00:02:12.086 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.086 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.086 [188/268] Linking static target lib/librte_reorder.a 00:02:12.346 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.346 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.346 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:12.346 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.346 [193/268] Linking static target lib/librte_security.a 00:02:12.914 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.914 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.174 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.174 [197/268] Generating lib/security.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:13.174 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:13.174 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.174 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.433 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:13.692 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.692 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:13.692 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.692 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:13.952 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.952 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.952 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.952 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.952 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:14.212 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.212 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:14.212 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:14.212 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.212 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.212 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:14.212 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.212 [218/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.212 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:14.212 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:14.212 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:14.472 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:14.472 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.472 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.472 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.472 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:14.731 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.666 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.569 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.569 [230/268] Linking target lib/librte_eal.so.24.1 00:02:17.569 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.569 [232/268] Linking target lib/librte_ring.so.24.1 00:02:17.569 [233/268] Linking target lib/librte_dmadev.so.24.1 00:02:17.569 [234/268] Linking target lib/librte_pci.so.24.1 00:02:17.569 [235/268] Linking target lib/librte_timer.so.24.1 00:02:17.569 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:17.569 [237/268] Linking target lib/librte_meter.so.24.1 00:02:17.828 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.828 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.828 [240/268] Generating symbol file 
lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.828 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.828 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.828 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.828 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:17.828 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:17.828 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.828 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.087 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.087 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:18.087 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.087 [251/268] Linking target lib/librte_net.so.24.1 00:02:18.087 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:18.087 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:18.087 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:18.345 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:18.345 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:18.345 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:18.345 [258/268] Linking target lib/librte_hash.so.24.1 00:02:18.345 [259/268] Linking target lib/librte_security.so.24.1 00:02:18.345 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.913 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.913 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:19.171 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:19.171 [264/268] Linking target lib/librte_power.so.24.1 
00:02:19.431 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.431 [266/268] Linking static target lib/librte_vhost.a 00:02:21.966 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.966 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:21.966 INFO: autodetecting backend as ninja 00:02:21.966 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:40.069 CC lib/ut/ut.o 00:02:40.069 CC lib/ut_mock/mock.o 00:02:40.069 CC lib/log/log.o 00:02:40.069 CC lib/log/log_flags.o 00:02:40.069 CC lib/log/log_deprecated.o 00:02:40.069 LIB libspdk_ut.a 00:02:40.069 LIB libspdk_log.a 00:02:40.069 LIB libspdk_ut_mock.a 00:02:40.069 SO libspdk_ut.so.2.0 00:02:40.069 SO libspdk_ut_mock.so.6.0 00:02:40.069 SO libspdk_log.so.7.1 00:02:40.069 SYMLINK libspdk_ut.so 00:02:40.069 SYMLINK libspdk_ut_mock.so 00:02:40.069 SYMLINK libspdk_log.so 00:02:40.069 CXX lib/trace_parser/trace.o 00:02:40.069 CC lib/dma/dma.o 00:02:40.069 CC lib/util/bit_array.o 00:02:40.069 CC lib/util/base64.o 00:02:40.069 CC lib/util/cpuset.o 00:02:40.069 CC lib/util/crc16.o 00:02:40.069 CC lib/util/crc32.o 00:02:40.069 CC lib/util/crc32c.o 00:02:40.069 CC lib/ioat/ioat.o 00:02:40.069 CC lib/vfio_user/host/vfio_user_pci.o 00:02:40.069 CC lib/util/crc32_ieee.o 00:02:40.069 CC lib/util/crc64.o 00:02:40.069 CC lib/vfio_user/host/vfio_user.o 00:02:40.069 CC lib/util/dif.o 00:02:40.069 LIB libspdk_dma.a 00:02:40.069 CC lib/util/fd.o 00:02:40.069 CC lib/util/fd_group.o 00:02:40.069 SO libspdk_dma.so.5.0 00:02:40.069 CC lib/util/file.o 00:02:40.069 CC lib/util/hexlify.o 00:02:40.069 SYMLINK libspdk_dma.so 00:02:40.069 CC lib/util/iov.o 00:02:40.069 LIB libspdk_ioat.a 00:02:40.069 SO libspdk_ioat.so.7.0 00:02:40.069 CC lib/util/math.o 00:02:40.069 CC lib/util/net.o 00:02:40.069 LIB libspdk_vfio_user.a 00:02:40.069 CC lib/util/pipe.o 00:02:40.069 SYMLINK libspdk_ioat.so 
00:02:40.069 CC lib/util/strerror_tls.o 00:02:40.069 CC lib/util/string.o 00:02:40.069 SO libspdk_vfio_user.so.5.0 00:02:40.069 CC lib/util/uuid.o 00:02:40.069 SYMLINK libspdk_vfio_user.so 00:02:40.069 CC lib/util/xor.o 00:02:40.070 CC lib/util/zipf.o 00:02:40.070 CC lib/util/md5.o 00:02:40.328 LIB libspdk_util.a 00:02:40.588 SO libspdk_util.so.10.1 00:02:40.588 LIB libspdk_trace_parser.a 00:02:40.588 SYMLINK libspdk_util.so 00:02:40.588 SO libspdk_trace_parser.so.6.0 00:02:40.847 SYMLINK libspdk_trace_parser.so 00:02:40.847 CC lib/env_dpdk/env.o 00:02:40.847 CC lib/env_dpdk/memory.o 00:02:40.847 CC lib/env_dpdk/pci.o 00:02:40.847 CC lib/env_dpdk/init.o 00:02:40.847 CC lib/vmd/vmd.o 00:02:40.847 CC lib/conf/conf.o 00:02:40.847 CC lib/idxd/idxd.o 00:02:40.847 CC lib/json/json_parse.o 00:02:40.847 CC lib/rdma_utils/rdma_utils.o 00:02:40.847 CC lib/rdma_provider/common.o 00:02:40.847 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:41.106 LIB libspdk_conf.a 00:02:41.106 SO libspdk_conf.so.6.0 00:02:41.106 CC lib/json/json_util.o 00:02:41.106 SYMLINK libspdk_conf.so 00:02:41.106 LIB libspdk_rdma_utils.a 00:02:41.106 CC lib/json/json_write.o 00:02:41.106 SO libspdk_rdma_utils.so.1.0 00:02:41.106 CC lib/env_dpdk/threads.o 00:02:41.106 LIB libspdk_rdma_provider.a 00:02:41.106 CC lib/env_dpdk/pci_ioat.o 00:02:41.106 SYMLINK libspdk_rdma_utils.so 00:02:41.106 CC lib/env_dpdk/pci_virtio.o 00:02:41.106 SO libspdk_rdma_provider.so.6.0 00:02:41.106 SYMLINK libspdk_rdma_provider.so 00:02:41.106 CC lib/env_dpdk/pci_vmd.o 00:02:41.106 CC lib/env_dpdk/pci_idxd.o 00:02:41.364 CC lib/env_dpdk/pci_event.o 00:02:41.364 CC lib/env_dpdk/sigbus_handler.o 00:02:41.364 CC lib/env_dpdk/pci_dpdk.o 00:02:41.364 LIB libspdk_json.a 00:02:41.364 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:41.364 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:41.364 SO libspdk_json.so.6.0 00:02:41.364 CC lib/idxd/idxd_user.o 00:02:41.364 CC lib/idxd/idxd_kernel.o 00:02:41.364 CC lib/vmd/led.o 00:02:41.364 SYMLINK libspdk_json.so 
00:02:41.624 LIB libspdk_vmd.a 00:02:41.624 SO libspdk_vmd.so.6.0 00:02:41.624 LIB libspdk_idxd.a 00:02:41.624 CC lib/jsonrpc/jsonrpc_server.o 00:02:41.624 SYMLINK libspdk_vmd.so 00:02:41.624 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:41.624 CC lib/jsonrpc/jsonrpc_client.o 00:02:41.624 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:41.624 SO libspdk_idxd.so.12.1 00:02:41.883 SYMLINK libspdk_idxd.so 00:02:41.883 LIB libspdk_jsonrpc.a 00:02:42.142 SO libspdk_jsonrpc.so.6.0 00:02:42.142 SYMLINK libspdk_jsonrpc.so 00:02:42.401 LIB libspdk_env_dpdk.a 00:02:42.401 SO libspdk_env_dpdk.so.15.1 00:02:42.401 CC lib/rpc/rpc.o 00:02:42.661 SYMLINK libspdk_env_dpdk.so 00:02:42.661 LIB libspdk_rpc.a 00:02:42.661 SO libspdk_rpc.so.6.0 00:02:42.921 SYMLINK libspdk_rpc.so 00:02:43.180 CC lib/trace/trace.o 00:02:43.180 CC lib/trace/trace_flags.o 00:02:43.180 CC lib/trace/trace_rpc.o 00:02:43.180 CC lib/keyring/keyring.o 00:02:43.180 CC lib/keyring/keyring_rpc.o 00:02:43.180 CC lib/notify/notify.o 00:02:43.180 CC lib/notify/notify_rpc.o 00:02:43.439 LIB libspdk_notify.a 00:02:43.439 SO libspdk_notify.so.6.0 00:02:43.439 LIB libspdk_keyring.a 00:02:43.439 LIB libspdk_trace.a 00:02:43.439 SYMLINK libspdk_notify.so 00:02:43.439 SO libspdk_keyring.so.2.0 00:02:43.439 SO libspdk_trace.so.11.0 00:02:43.439 SYMLINK libspdk_keyring.so 00:02:43.439 SYMLINK libspdk_trace.so 00:02:44.007 CC lib/sock/sock.o 00:02:44.007 CC lib/sock/sock_rpc.o 00:02:44.007 CC lib/thread/thread.o 00:02:44.007 CC lib/thread/iobuf.o 00:02:44.267 LIB libspdk_sock.a 00:02:44.267 SO libspdk_sock.so.10.0 00:02:44.527 SYMLINK libspdk_sock.so 00:02:44.786 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:44.786 CC lib/nvme/nvme_ctrlr.o 00:02:44.786 CC lib/nvme/nvme_fabric.o 00:02:44.786 CC lib/nvme/nvme_ns_cmd.o 00:02:44.786 CC lib/nvme/nvme_ns.o 00:02:44.786 CC lib/nvme/nvme_pcie_common.o 00:02:44.786 CC lib/nvme/nvme_pcie.o 00:02:44.786 CC lib/nvme/nvme.o 00:02:44.786 CC lib/nvme/nvme_qpair.o 00:02:45.354 LIB libspdk_thread.a 00:02:45.613 
SO libspdk_thread.so.10.2 00:02:45.613 CC lib/nvme/nvme_quirks.o 00:02:45.613 CC lib/nvme/nvme_transport.o 00:02:45.613 CC lib/nvme/nvme_discovery.o 00:02:45.613 SYMLINK libspdk_thread.so 00:02:45.613 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:45.613 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:45.613 CC lib/nvme/nvme_tcp.o 00:02:45.613 CC lib/nvme/nvme_opal.o 00:02:45.872 CC lib/nvme/nvme_io_msg.o 00:02:45.872 CC lib/nvme/nvme_poll_group.o 00:02:45.872 CC lib/nvme/nvme_zns.o 00:02:46.131 CC lib/nvme/nvme_stubs.o 00:02:46.131 CC lib/nvme/nvme_auth.o 00:02:46.131 CC lib/nvme/nvme_cuse.o 00:02:46.131 CC lib/accel/accel.o 00:02:46.390 CC lib/nvme/nvme_rdma.o 00:02:46.390 CC lib/blob/blobstore.o 00:02:46.649 CC lib/init/json_config.o 00:02:46.649 CC lib/fsdev/fsdev.o 00:02:46.649 CC lib/virtio/virtio.o 00:02:46.908 CC lib/init/subsystem.o 00:02:46.908 CC lib/virtio/virtio_vhost_user.o 00:02:46.908 CC lib/virtio/virtio_vfio_user.o 00:02:46.908 CC lib/init/subsystem_rpc.o 00:02:47.168 CC lib/init/rpc.o 00:02:47.168 CC lib/accel/accel_rpc.o 00:02:47.168 CC lib/accel/accel_sw.o 00:02:47.168 CC lib/virtio/virtio_pci.o 00:02:47.168 CC lib/blob/request.o 00:02:47.168 LIB libspdk_init.a 00:02:47.168 CC lib/fsdev/fsdev_io.o 00:02:47.168 SO libspdk_init.so.6.0 00:02:47.168 CC lib/fsdev/fsdev_rpc.o 00:02:47.168 CC lib/blob/zeroes.o 00:02:47.427 SYMLINK libspdk_init.so 00:02:47.427 CC lib/blob/blob_bs_dev.o 00:02:47.427 LIB libspdk_virtio.a 00:02:47.427 LIB libspdk_accel.a 00:02:47.427 SO libspdk_virtio.so.7.0 00:02:47.427 SO libspdk_accel.so.16.0 00:02:47.427 CC lib/event/app.o 00:02:47.427 CC lib/event/reactor.o 00:02:47.427 CC lib/event/app_rpc.o 00:02:47.427 CC lib/event/log_rpc.o 00:02:47.427 CC lib/event/scheduler_static.o 00:02:47.427 SYMLINK libspdk_virtio.so 00:02:47.427 SYMLINK libspdk_accel.so 00:02:47.689 LIB libspdk_fsdev.a 00:02:47.689 SO libspdk_fsdev.so.1.0 00:02:47.689 LIB libspdk_nvme.a 00:02:47.689 SYMLINK libspdk_fsdev.so 00:02:47.689 CC lib/bdev/bdev.o 00:02:47.689 CC 
lib/bdev/bdev_rpc.o 00:02:47.689 CC lib/bdev/bdev_zone.o 00:02:47.689 CC lib/bdev/part.o 00:02:47.689 CC lib/bdev/scsi_nvme.o 00:02:47.962 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:47.962 SO libspdk_nvme.so.15.0 00:02:47.962 LIB libspdk_event.a 00:02:47.962 SO libspdk_event.so.14.0 00:02:47.962 SYMLINK libspdk_nvme.so 00:02:48.234 SYMLINK libspdk_event.so 00:02:48.494 LIB libspdk_fuse_dispatcher.a 00:02:48.494 SO libspdk_fuse_dispatcher.so.1.0 00:02:48.494 SYMLINK libspdk_fuse_dispatcher.so 00:02:49.874 LIB libspdk_blob.a 00:02:49.874 SO libspdk_blob.so.11.0 00:02:49.874 SYMLINK libspdk_blob.so 00:02:50.442 CC lib/blobfs/blobfs.o 00:02:50.442 CC lib/blobfs/tree.o 00:02:50.442 CC lib/lvol/lvol.o 00:02:50.442 LIB libspdk_bdev.a 00:02:50.442 SO libspdk_bdev.so.17.0 00:02:50.701 SYMLINK libspdk_bdev.so 00:02:50.701 CC lib/nbd/nbd.o 00:02:50.701 CC lib/nbd/nbd_rpc.o 00:02:50.701 CC lib/ublk/ublk_rpc.o 00:02:50.701 CC lib/ublk/ublk.o 00:02:50.701 CC lib/nvmf/ctrlr.o 00:02:50.701 CC lib/nvmf/ctrlr_discovery.o 00:02:50.701 CC lib/ftl/ftl_core.o 00:02:50.960 CC lib/scsi/dev.o 00:02:50.960 CC lib/nvmf/ctrlr_bdev.o 00:02:50.960 CC lib/scsi/lun.o 00:02:50.960 CC lib/scsi/port.o 00:02:51.220 LIB libspdk_blobfs.a 00:02:51.220 SO libspdk_blobfs.so.10.0 00:02:51.220 SYMLINK libspdk_blobfs.so 00:02:51.220 CC lib/scsi/scsi.o 00:02:51.220 CC lib/ftl/ftl_init.o 00:02:51.220 CC lib/ftl/ftl_layout.o 00:02:51.220 LIB libspdk_nbd.a 00:02:51.220 CC lib/scsi/scsi_bdev.o 00:02:51.220 LIB libspdk_lvol.a 00:02:51.220 SO libspdk_nbd.so.7.0 00:02:51.220 SO libspdk_lvol.so.10.0 00:02:51.220 SYMLINK libspdk_nbd.so 00:02:51.220 CC lib/scsi/scsi_pr.o 00:02:51.220 CC lib/scsi/scsi_rpc.o 00:02:51.220 SYMLINK libspdk_lvol.so 00:02:51.220 CC lib/ftl/ftl_debug.o 00:02:51.220 CC lib/scsi/task.o 00:02:51.479 CC lib/ftl/ftl_io.o 00:02:51.479 CC lib/ftl/ftl_sb.o 00:02:51.479 LIB libspdk_ublk.a 00:02:51.479 SO libspdk_ublk.so.3.0 00:02:51.479 SYMLINK libspdk_ublk.so 00:02:51.479 CC lib/nvmf/subsystem.o 
00:02:51.479 CC lib/nvmf/nvmf.o 00:02:51.479 CC lib/ftl/ftl_l2p.o 00:02:51.479 CC lib/ftl/ftl_l2p_flat.o 00:02:51.737 CC lib/ftl/ftl_nv_cache.o 00:02:51.737 CC lib/ftl/ftl_band.o 00:02:51.737 CC lib/ftl/ftl_band_ops.o 00:02:51.737 CC lib/nvmf/nvmf_rpc.o 00:02:51.737 CC lib/nvmf/transport.o 00:02:51.737 CC lib/nvmf/tcp.o 00:02:51.737 LIB libspdk_scsi.a 00:02:51.738 SO libspdk_scsi.so.9.0 00:02:51.997 SYMLINK libspdk_scsi.so 00:02:51.997 CC lib/nvmf/stubs.o 00:02:51.997 CC lib/ftl/ftl_writer.o 00:02:51.997 CC lib/ftl/ftl_rq.o 00:02:52.256 CC lib/nvmf/mdns_server.o 00:02:52.256 CC lib/ftl/ftl_reloc.o 00:02:52.256 CC lib/nvmf/rdma.o 00:02:52.515 CC lib/nvmf/auth.o 00:02:52.515 CC lib/ftl/ftl_l2p_cache.o 00:02:52.515 CC lib/iscsi/conn.o 00:02:52.515 CC lib/iscsi/init_grp.o 00:02:52.516 CC lib/iscsi/iscsi.o 00:02:52.775 CC lib/ftl/ftl_p2l.o 00:02:52.775 CC lib/ftl/ftl_p2l_log.o 00:02:52.775 CC lib/iscsi/param.o 00:02:53.034 CC lib/vhost/vhost.o 00:02:53.034 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.034 CC lib/vhost/vhost_rpc.o 00:02:53.034 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.294 CC lib/iscsi/portal_grp.o 00:02:53.294 CC lib/iscsi/tgt_node.o 00:02:53.294 CC lib/iscsi/iscsi_subsystem.o 00:02:53.294 CC lib/iscsi/iscsi_rpc.o 00:02:53.294 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.294 CC lib/iscsi/task.o 00:02:53.554 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.554 CC lib/vhost/vhost_scsi.o 00:02:53.554 CC lib/vhost/vhost_blk.o 00:02:53.554 CC lib/vhost/rte_vhost_user.o 00:02:53.554 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.554 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.554 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.554 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.813 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.813 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.813 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.813 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.813 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:54.073 CC lib/ftl/utils/ftl_conf.o 00:02:54.073 CC lib/ftl/utils/ftl_md.o 00:02:54.073 CC 
lib/ftl/utils/ftl_mempool.o 00:02:54.073 CC lib/ftl/utils/ftl_bitmap.o 00:02:54.073 LIB libspdk_iscsi.a 00:02:54.073 CC lib/ftl/utils/ftl_property.o 00:02:54.332 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:54.332 SO libspdk_iscsi.so.8.0 00:02:54.332 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:54.332 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:54.332 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:54.332 SYMLINK libspdk_iscsi.so 00:02:54.332 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:54.332 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.332 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:54.332 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.592 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:54.592 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.592 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.592 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:54.592 LIB libspdk_vhost.a 00:02:54.592 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:54.592 CC lib/ftl/base/ftl_base_dev.o 00:02:54.592 CC lib/ftl/base/ftl_base_bdev.o 00:02:54.592 SO libspdk_vhost.so.8.0 00:02:54.592 CC lib/ftl/ftl_trace.o 00:02:54.850 SYMLINK libspdk_vhost.so 00:02:54.850 LIB libspdk_nvmf.a 00:02:54.850 LIB libspdk_ftl.a 00:02:54.850 SO libspdk_nvmf.so.19.0 00:02:55.109 SYMLINK libspdk_nvmf.so 00:02:55.109 SO libspdk_ftl.so.9.0 00:02:55.369 SYMLINK libspdk_ftl.so 00:02:55.938 CC module/env_dpdk/env_dpdk_rpc.o 00:02:55.938 CC module/blob/bdev/blob_bdev.o 00:02:55.938 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:55.938 CC module/scheduler/gscheduler/gscheduler.o 00:02:55.938 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:55.938 CC module/fsdev/aio/fsdev_aio.o 00:02:55.938 CC module/keyring/file/keyring.o 00:02:55.938 CC module/sock/posix/posix.o 00:02:55.938 CC module/accel/error/accel_error.o 00:02:55.938 CC module/keyring/linux/keyring.o 00:02:55.938 LIB libspdk_env_dpdk_rpc.a 00:02:55.938 SO libspdk_env_dpdk_rpc.so.6.0 00:02:55.938 LIB libspdk_scheduler_dpdk_governor.a 00:02:55.938 SYMLINK libspdk_env_dpdk_rpc.so 00:02:55.938 
CC module/accel/error/accel_error_rpc.o 00:02:55.938 CC module/keyring/linux/keyring_rpc.o 00:02:55.938 LIB libspdk_scheduler_gscheduler.a 00:02:55.938 CC module/keyring/file/keyring_rpc.o 00:02:55.938 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:56.197 SO libspdk_scheduler_gscheduler.so.4.0 00:02:56.197 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:56.197 LIB libspdk_scheduler_dynamic.a 00:02:56.197 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:56.197 SO libspdk_scheduler_dynamic.so.4.0 00:02:56.197 SYMLINK libspdk_scheduler_gscheduler.so 00:02:56.197 LIB libspdk_blob_bdev.a 00:02:56.197 LIB libspdk_keyring_file.a 00:02:56.197 LIB libspdk_keyring_linux.a 00:02:56.197 SO libspdk_blob_bdev.so.11.0 00:02:56.197 LIB libspdk_accel_error.a 00:02:56.197 SYMLINK libspdk_scheduler_dynamic.so 00:02:56.197 CC module/fsdev/aio/linux_aio_mgr.o 00:02:56.197 SO libspdk_keyring_file.so.2.0 00:02:56.197 SO libspdk_keyring_linux.so.1.0 00:02:56.197 SO libspdk_accel_error.so.2.0 00:02:56.197 SYMLINK libspdk_blob_bdev.so 00:02:56.197 SYMLINK libspdk_keyring_file.so 00:02:56.197 SYMLINK libspdk_keyring_linux.so 00:02:56.197 SYMLINK libspdk_accel_error.so 00:02:56.197 CC module/accel/ioat/accel_ioat.o 00:02:56.197 CC module/accel/ioat/accel_ioat_rpc.o 00:02:56.456 CC module/accel/dsa/accel_dsa.o 00:02:56.456 CC module/accel/dsa/accel_dsa_rpc.o 00:02:56.456 CC module/accel/iaa/accel_iaa.o 00:02:56.456 CC module/accel/iaa/accel_iaa_rpc.o 00:02:56.456 LIB libspdk_accel_ioat.a 00:02:56.456 CC module/bdev/delay/vbdev_delay.o 00:02:56.456 CC module/blobfs/bdev/blobfs_bdev.o 00:02:56.456 SO libspdk_accel_ioat.so.6.0 00:02:56.456 CC module/bdev/error/vbdev_error.o 00:02:56.456 CC module/bdev/error/vbdev_error_rpc.o 00:02:56.716 SYMLINK libspdk_accel_ioat.so 00:02:56.716 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:56.716 LIB libspdk_accel_iaa.a 00:02:56.716 CC module/bdev/gpt/gpt.o 00:02:56.716 LIB libspdk_fsdev_aio.a 00:02:56.716 LIB libspdk_accel_dsa.a 00:02:56.716 SO 
libspdk_accel_iaa.so.3.0 00:02:56.716 SO libspdk_fsdev_aio.so.1.0 00:02:56.716 SO libspdk_accel_dsa.so.5.0 00:02:56.716 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:56.716 SYMLINK libspdk_accel_iaa.so 00:02:56.716 CC module/bdev/gpt/vbdev_gpt.o 00:02:56.716 SYMLINK libspdk_accel_dsa.so 00:02:56.716 SYMLINK libspdk_fsdev_aio.so 00:02:56.716 LIB libspdk_sock_posix.a 00:02:56.716 SO libspdk_sock_posix.so.6.0 00:02:56.716 LIB libspdk_bdev_error.a 00:02:56.716 LIB libspdk_blobfs_bdev.a 00:02:56.976 SO libspdk_bdev_error.so.6.0 00:02:56.976 SO libspdk_blobfs_bdev.so.6.0 00:02:56.976 SYMLINK libspdk_sock_posix.so 00:02:56.976 LIB libspdk_bdev_delay.a 00:02:56.976 CC module/bdev/lvol/vbdev_lvol.o 00:02:56.976 CC module/bdev/malloc/bdev_malloc.o 00:02:56.976 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:56.976 CC module/bdev/null/bdev_null.o 00:02:56.976 CC module/bdev/nvme/bdev_nvme.o 00:02:56.976 SYMLINK libspdk_blobfs_bdev.so 00:02:56.976 SO libspdk_bdev_delay.so.6.0 00:02:56.976 SYMLINK libspdk_bdev_error.so 00:02:56.976 CC module/bdev/null/bdev_null_rpc.o 00:02:56.976 CC module/bdev/passthru/vbdev_passthru.o 00:02:56.976 LIB libspdk_bdev_gpt.a 00:02:56.976 SYMLINK libspdk_bdev_delay.so 00:02:56.976 SO libspdk_bdev_gpt.so.6.0 00:02:56.976 SYMLINK libspdk_bdev_gpt.so 00:02:56.976 CC module/bdev/raid/bdev_raid.o 00:02:56.976 CC module/bdev/split/vbdev_split.o 00:02:57.236 CC module/bdev/raid/bdev_raid_rpc.o 00:02:57.236 LIB libspdk_bdev_null.a 00:02:57.236 SO libspdk_bdev_null.so.6.0 00:02:57.236 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:57.236 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:57.236 SYMLINK libspdk_bdev_null.so 00:02:57.236 CC module/bdev/raid/bdev_raid_sb.o 00:02:57.236 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:57.236 CC module/bdev/split/vbdev_split_rpc.o 00:02:57.496 LIB libspdk_bdev_passthru.a 00:02:57.496 LIB libspdk_bdev_lvol.a 00:02:57.496 CC module/bdev/aio/bdev_aio.o 00:02:57.496 SO libspdk_bdev_lvol.so.6.0 00:02:57.496 SO 
libspdk_bdev_passthru.so.6.0 00:02:57.496 LIB libspdk_bdev_malloc.a 00:02:57.496 SO libspdk_bdev_malloc.so.6.0 00:02:57.496 CC module/bdev/ftl/bdev_ftl.o 00:02:57.496 SYMLINK libspdk_bdev_passthru.so 00:02:57.496 SYMLINK libspdk_bdev_lvol.so 00:02:57.496 CC module/bdev/aio/bdev_aio_rpc.o 00:02:57.496 CC module/bdev/raid/raid0.o 00:02:57.496 LIB libspdk_bdev_split.a 00:02:57.496 SO libspdk_bdev_split.so.6.0 00:02:57.496 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:57.496 SYMLINK libspdk_bdev_malloc.so 00:02:57.496 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:57.496 SYMLINK libspdk_bdev_split.so 00:02:57.755 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.755 CC module/bdev/iscsi/bdev_iscsi.o 00:02:57.755 CC module/bdev/nvme/nvme_rpc.o 00:02:57.755 LIB libspdk_bdev_zone_block.a 00:02:57.755 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:57.755 LIB libspdk_bdev_aio.a 00:02:57.755 SO libspdk_bdev_zone_block.so.6.0 00:02:57.755 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:57.755 SO libspdk_bdev_aio.so.6.0 00:02:57.755 LIB libspdk_bdev_ftl.a 00:02:57.755 SYMLINK libspdk_bdev_zone_block.so 00:02:57.755 SO libspdk_bdev_ftl.so.6.0 00:02:57.755 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:57.755 SYMLINK libspdk_bdev_aio.so 00:02:57.755 CC module/bdev/nvme/bdev_mdns_client.o 00:02:57.755 SYMLINK libspdk_bdev_ftl.so 00:02:57.755 CC module/bdev/raid/raid1.o 00:02:58.014 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.014 CC module/bdev/nvme/vbdev_opal.o 00:02:58.014 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.014 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.014 CC module/bdev/raid/concat.o 00:02:58.014 LIB libspdk_bdev_iscsi.a 00:02:58.014 CC module/bdev/raid/raid5f.o 00:02:58.273 SO libspdk_bdev_iscsi.so.6.0 00:02:58.273 SYMLINK libspdk_bdev_iscsi.so 00:02:58.273 LIB libspdk_bdev_virtio.a 00:02:58.273 SO libspdk_bdev_virtio.so.6.0 00:02:58.273 SYMLINK libspdk_bdev_virtio.so 00:02:58.532 LIB libspdk_bdev_raid.a 00:02:58.791 SO libspdk_bdev_raid.so.6.0 00:02:58.791 
SYMLINK libspdk_bdev_raid.so 00:02:59.359 LIB libspdk_bdev_nvme.a 00:02:59.359 SO libspdk_bdev_nvme.so.7.0 00:02:59.618 SYMLINK libspdk_bdev_nvme.so 00:03:00.187 CC module/event/subsystems/scheduler/scheduler.o 00:03:00.187 CC module/event/subsystems/iobuf/iobuf.o 00:03:00.187 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:00.187 CC module/event/subsystems/sock/sock.o 00:03:00.187 CC module/event/subsystems/vmd/vmd.o 00:03:00.187 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:00.187 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:00.187 CC module/event/subsystems/keyring/keyring.o 00:03:00.187 CC module/event/subsystems/fsdev/fsdev.o 00:03:00.187 LIB libspdk_event_vhost_blk.a 00:03:00.187 LIB libspdk_event_scheduler.a 00:03:00.187 LIB libspdk_event_keyring.a 00:03:00.187 LIB libspdk_event_vmd.a 00:03:00.187 LIB libspdk_event_iobuf.a 00:03:00.187 LIB libspdk_event_fsdev.a 00:03:00.187 LIB libspdk_event_sock.a 00:03:00.187 SO libspdk_event_vhost_blk.so.3.0 00:03:00.188 SO libspdk_event_scheduler.so.4.0 00:03:00.188 SO libspdk_event_keyring.so.1.0 00:03:00.188 SO libspdk_event_sock.so.5.0 00:03:00.188 SO libspdk_event_iobuf.so.3.0 00:03:00.188 SO libspdk_event_fsdev.so.1.0 00:03:00.188 SO libspdk_event_vmd.so.6.0 00:03:00.188 SYMLINK libspdk_event_vhost_blk.so 00:03:00.188 SYMLINK libspdk_event_keyring.so 00:03:00.188 SYMLINK libspdk_event_scheduler.so 00:03:00.188 SYMLINK libspdk_event_fsdev.so 00:03:00.188 SYMLINK libspdk_event_sock.so 00:03:00.188 SYMLINK libspdk_event_vmd.so 00:03:00.447 SYMLINK libspdk_event_iobuf.so 00:03:00.706 CC module/event/subsystems/accel/accel.o 00:03:00.965 LIB libspdk_event_accel.a 00:03:00.965 SO libspdk_event_accel.so.6.0 00:03:00.965 SYMLINK libspdk_event_accel.so 00:03:01.533 CC module/event/subsystems/bdev/bdev.o 00:03:01.533 LIB libspdk_event_bdev.a 00:03:01.533 SO libspdk_event_bdev.so.6.0 00:03:01.792 SYMLINK libspdk_event_bdev.so 00:03:02.051 CC module/event/subsystems/scsi/scsi.o 00:03:02.051 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.051 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.051 CC module/event/subsystems/ublk/ublk.o 00:03:02.051 CC module/event/subsystems/nbd/nbd.o 00:03:02.310 LIB libspdk_event_nbd.a 00:03:02.310 LIB libspdk_event_scsi.a 00:03:02.310 LIB libspdk_event_ublk.a 00:03:02.310 SO libspdk_event_scsi.so.6.0 00:03:02.310 SO libspdk_event_nbd.so.6.0 00:03:02.310 SO libspdk_event_ublk.so.3.0 00:03:02.310 SYMLINK libspdk_event_scsi.so 00:03:02.310 LIB libspdk_event_nvmf.a 00:03:02.310 SYMLINK libspdk_event_nbd.so 00:03:02.310 SYMLINK libspdk_event_ublk.so 00:03:02.310 SO libspdk_event_nvmf.so.6.0 00:03:02.310 SYMLINK libspdk_event_nvmf.so 00:03:02.569 CC module/event/subsystems/iscsi/iscsi.o 00:03:02.569 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:02.831 LIB libspdk_event_vhost_scsi.a 00:03:02.831 LIB libspdk_event_iscsi.a 00:03:02.831 SO libspdk_event_vhost_scsi.so.3.0 00:03:02.831 SO libspdk_event_iscsi.so.6.0 00:03:02.831 SYMLINK libspdk_event_vhost_scsi.so 00:03:02.831 SYMLINK libspdk_event_iscsi.so 00:03:03.092 SO libspdk.so.6.0 00:03:03.092 SYMLINK libspdk.so 00:03:03.361 CC app/trace_record/trace_record.o 00:03:03.361 CXX app/trace/trace.o 00:03:03.361 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.636 CC app/iscsi_tgt/iscsi_tgt.o 00:03:03.636 CC app/nvmf_tgt/nvmf_main.o 00:03:03.636 CC examples/ioat/perf/perf.o 00:03:03.636 CC test/thread/poller_perf/poller_perf.o 00:03:03.636 CC examples/util/zipf/zipf.o 00:03:03.636 CC test/dma/test_dma/test_dma.o 00:03:03.636 CC test/app/bdev_svc/bdev_svc.o 00:03:03.636 LINK interrupt_tgt 00:03:03.636 LINK nvmf_tgt 00:03:03.637 LINK poller_perf 00:03:03.637 LINK iscsi_tgt 00:03:03.637 LINK zipf 00:03:03.637 LINK spdk_trace_record 00:03:03.637 LINK ioat_perf 00:03:03.637 LINK bdev_svc 00:03:03.896 LINK spdk_trace 00:03:03.896 CC examples/ioat/verify/verify.o 00:03:03.896 CC app/spdk_lspci/spdk_lspci.o 00:03:03.896 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.896 CC 
examples/sock/hello_world/hello_sock.o 00:03:03.896 CC examples/thread/thread/thread_ex.o 00:03:04.155 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:04.155 CC app/spdk_tgt/spdk_tgt.o 00:03:04.155 CC examples/idxd/perf/perf.o 00:03:04.155 LINK test_dma 00:03:04.155 LINK spdk_lspci 00:03:04.155 LINK verify 00:03:04.155 CC examples/vmd/led/led.o 00:03:04.155 LINK lsvmd 00:03:04.155 LINK spdk_tgt 00:03:04.155 LINK hello_sock 00:03:04.155 LINK led 00:03:04.155 LINK thread 00:03:04.415 TEST_HEADER include/spdk/accel.h 00:03:04.415 TEST_HEADER include/spdk/accel_module.h 00:03:04.415 TEST_HEADER include/spdk/assert.h 00:03:04.415 TEST_HEADER include/spdk/barrier.h 00:03:04.415 TEST_HEADER include/spdk/base64.h 00:03:04.415 TEST_HEADER include/spdk/bdev.h 00:03:04.415 TEST_HEADER include/spdk/bdev_module.h 00:03:04.415 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.415 TEST_HEADER include/spdk/bit_array.h 00:03:04.415 TEST_HEADER include/spdk/bit_pool.h 00:03:04.415 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.415 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.415 TEST_HEADER include/spdk/blobfs.h 00:03:04.415 TEST_HEADER include/spdk/blob.h 00:03:04.415 TEST_HEADER include/spdk/conf.h 00:03:04.415 TEST_HEADER include/spdk/config.h 00:03:04.415 TEST_HEADER include/spdk/cpuset.h 00:03:04.415 TEST_HEADER include/spdk/crc16.h 00:03:04.415 TEST_HEADER include/spdk/crc32.h 00:03:04.415 TEST_HEADER include/spdk/crc64.h 00:03:04.415 TEST_HEADER include/spdk/dif.h 00:03:04.415 TEST_HEADER include/spdk/dma.h 00:03:04.415 TEST_HEADER include/spdk/endian.h 00:03:04.415 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.415 CC test/app/histogram_perf/histogram_perf.o 00:03:04.415 TEST_HEADER include/spdk/env.h 00:03:04.415 TEST_HEADER include/spdk/event.h 00:03:04.415 TEST_HEADER include/spdk/fd_group.h 00:03:04.415 TEST_HEADER include/spdk/fd.h 00:03:04.415 TEST_HEADER include/spdk/file.h 00:03:04.415 TEST_HEADER include/spdk/fsdev.h 00:03:04.415 TEST_HEADER include/spdk/fsdev_module.h 
00:03:04.415 TEST_HEADER include/spdk/ftl.h 00:03:04.415 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.415 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.415 TEST_HEADER include/spdk/hexlify.h 00:03:04.415 TEST_HEADER include/spdk/histogram_data.h 00:03:04.415 TEST_HEADER include/spdk/idxd.h 00:03:04.415 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.415 TEST_HEADER include/spdk/init.h 00:03:04.415 TEST_HEADER include/spdk/ioat.h 00:03:04.415 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.415 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.415 TEST_HEADER include/spdk/json.h 00:03:04.415 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.415 TEST_HEADER include/spdk/keyring.h 00:03:04.415 TEST_HEADER include/spdk/keyring_module.h 00:03:04.415 TEST_HEADER include/spdk/likely.h 00:03:04.415 LINK idxd_perf 00:03:04.415 TEST_HEADER include/spdk/log.h 00:03:04.415 TEST_HEADER include/spdk/lvol.h 00:03:04.415 TEST_HEADER include/spdk/md5.h 00:03:04.415 TEST_HEADER include/spdk/memory.h 00:03:04.415 TEST_HEADER include/spdk/mmio.h 00:03:04.415 TEST_HEADER include/spdk/nbd.h 00:03:04.415 TEST_HEADER include/spdk/net.h 00:03:04.415 TEST_HEADER include/spdk/notify.h 00:03:04.415 TEST_HEADER include/spdk/nvme.h 00:03:04.415 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.415 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.415 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:04.415 TEST_HEADER include/spdk/nvme_spec.h 00:03:04.415 CC test/event/event_perf/event_perf.o 00:03:04.415 TEST_HEADER include/spdk/nvme_zns.h 00:03:04.415 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:04.415 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.415 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:04.415 TEST_HEADER include/spdk/nvmf.h 00:03:04.415 TEST_HEADER include/spdk/nvmf_spec.h 00:03:04.415 LINK nvme_fuzz 00:03:04.415 TEST_HEADER include/spdk/nvmf_transport.h 00:03:04.415 TEST_HEADER include/spdk/opal.h 00:03:04.415 TEST_HEADER include/spdk/opal_spec.h 00:03:04.415 TEST_HEADER include/spdk/pci_ids.h 
00:03:04.415 TEST_HEADER include/spdk/pipe.h 00:03:04.415 CC test/env/mem_callbacks/mem_callbacks.o 00:03:04.415 TEST_HEADER include/spdk/queue.h 00:03:04.415 TEST_HEADER include/spdk/reduce.h 00:03:04.415 TEST_HEADER include/spdk/rpc.h 00:03:04.415 TEST_HEADER include/spdk/scheduler.h 00:03:04.415 TEST_HEADER include/spdk/scsi.h 00:03:04.415 TEST_HEADER include/spdk/scsi_spec.h 00:03:04.415 TEST_HEADER include/spdk/sock.h 00:03:04.415 TEST_HEADER include/spdk/stdinc.h 00:03:04.415 CC app/spdk_nvme_perf/perf.o 00:03:04.415 TEST_HEADER include/spdk/string.h 00:03:04.415 LINK histogram_perf 00:03:04.415 TEST_HEADER include/spdk/thread.h 00:03:04.415 TEST_HEADER include/spdk/trace.h 00:03:04.415 TEST_HEADER include/spdk/trace_parser.h 00:03:04.415 TEST_HEADER include/spdk/tree.h 00:03:04.415 TEST_HEADER include/spdk/ublk.h 00:03:04.415 TEST_HEADER include/spdk/util.h 00:03:04.415 CC app/spdk_nvme_identify/identify.o 00:03:04.415 TEST_HEADER include/spdk/uuid.h 00:03:04.415 TEST_HEADER include/spdk/version.h 00:03:04.415 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:04.415 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:04.415 TEST_HEADER include/spdk/vhost.h 00:03:04.415 TEST_HEADER include/spdk/vmd.h 00:03:04.415 TEST_HEADER include/spdk/xor.h 00:03:04.415 TEST_HEADER include/spdk/zipf.h 00:03:04.415 CXX test/cpp_headers/accel.o 00:03:04.675 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.675 LINK event_perf 00:03:04.675 CXX test/cpp_headers/accel_module.o 00:03:04.675 CC examples/nvme/hello_world/hello_world.o 00:03:04.675 CC test/app/jsoncat/jsoncat.o 00:03:04.675 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:04.675 CC test/event/reactor/reactor.o 00:03:04.934 LINK spdk_nvme_discover 00:03:04.934 LINK jsoncat 00:03:04.934 CXX test/cpp_headers/assert.o 00:03:04.934 LINK reactor 00:03:04.934 LINK hello_world 00:03:04.934 LINK mem_callbacks 00:03:04.934 CXX test/cpp_headers/barrier.o 00:03:04.934 LINK hello_fsdev 00:03:04.934 CC test/env/vtophys/vtophys.o 
00:03:04.934 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.194 CXX test/cpp_headers/base64.o 00:03:05.194 CC test/event/reactor_perf/reactor_perf.o 00:03:05.194 CC examples/nvme/reconnect/reconnect.o 00:03:05.194 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:05.194 LINK vtophys 00:03:05.194 LINK env_dpdk_post_init 00:03:05.194 CXX test/cpp_headers/bdev.o 00:03:05.194 LINK reactor_perf 00:03:05.194 CC test/app/stub/stub.o 00:03:05.453 LINK spdk_nvme_perf 00:03:05.453 CC examples/nvme/arbitration/arbitration.o 00:03:05.453 CXX test/cpp_headers/bdev_module.o 00:03:05.453 CC test/env/memory/memory_ut.o 00:03:05.453 LINK spdk_nvme_identify 00:03:05.453 LINK stub 00:03:05.453 LINK reconnect 00:03:05.453 CC test/event/app_repeat/app_repeat.o 00:03:05.453 CC test/env/pci/pci_ut.o 00:03:05.713 CXX test/cpp_headers/bdev_zone.o 00:03:05.713 LINK app_repeat 00:03:05.713 CC app/spdk_top/spdk_top.o 00:03:05.713 LINK nvme_manage 00:03:05.713 CC examples/nvme/hotplug/hotplug.o 00:03:05.713 LINK arbitration 00:03:05.713 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:05.713 CXX test/cpp_headers/bit_array.o 00:03:05.973 LINK cmb_copy 00:03:05.973 CC test/event/scheduler/scheduler.o 00:03:05.973 CXX test/cpp_headers/bit_pool.o 00:03:05.973 LINK hotplug 00:03:05.973 LINK pci_ut 00:03:05.973 CC examples/accel/perf/accel_perf.o 00:03:05.973 CXX test/cpp_headers/blob_bdev.o 00:03:05.973 CC examples/blob/hello_world/hello_blob.o 00:03:06.233 LINK scheduler 00:03:06.233 CC examples/nvme/abort/abort.o 00:03:06.233 CC app/vhost/vhost.o 00:03:06.233 CXX test/cpp_headers/blobfs_bdev.o 00:03:06.233 LINK iscsi_fuzz 00:03:06.233 LINK hello_blob 00:03:06.233 CC app/spdk_dd/spdk_dd.o 00:03:06.233 LINK vhost 00:03:06.492 CXX test/cpp_headers/blobfs.o 00:03:06.492 CC app/fio/nvme/fio_plugin.o 00:03:06.492 LINK memory_ut 00:03:06.492 LINK accel_perf 00:03:06.492 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:06.492 LINK abort 00:03:06.492 CXX test/cpp_headers/blob.o 00:03:06.492 CXX 
test/cpp_headers/conf.o 00:03:06.492 CC examples/blob/cli/blobcli.o 00:03:06.492 LINK spdk_top 00:03:06.752 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:06.752 LINK spdk_dd 00:03:06.752 CXX test/cpp_headers/config.o 00:03:06.752 CXX test/cpp_headers/cpuset.o 00:03:06.752 CC test/rpc_client/rpc_client_test.o 00:03:06.752 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.752 CC test/nvme/aer/aer.o 00:03:06.752 CXX test/cpp_headers/crc16.o 00:03:07.012 CC test/accel/dif/dif.o 00:03:07.012 LINK pmr_persistence 00:03:07.012 LINK rpc_client_test 00:03:07.012 CC test/blobfs/mkfs/mkfs.o 00:03:07.012 LINK spdk_nvme 00:03:07.012 CXX test/cpp_headers/crc32.o 00:03:07.012 LINK vhost_fuzz 00:03:07.012 LINK blobcli 00:03:07.012 CC test/lvol/esnap/esnap.o 00:03:07.012 CXX test/cpp_headers/crc64.o 00:03:07.012 LINK aer 00:03:07.271 LINK mkfs 00:03:07.271 CC test/nvme/reset/reset.o 00:03:07.271 CC app/fio/bdev/fio_plugin.o 00:03:07.271 CC test/nvme/sgl/sgl.o 00:03:07.271 CC test/nvme/e2edp/nvme_dp.o 00:03:07.271 CXX test/cpp_headers/dif.o 00:03:07.271 CC test/nvme/overhead/overhead.o 00:03:07.531 CC examples/bdev/hello_world/hello_bdev.o 00:03:07.531 CXX test/cpp_headers/dma.o 00:03:07.531 LINK reset 00:03:07.531 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.531 LINK sgl 00:03:07.531 LINK nvme_dp 00:03:07.531 CXX test/cpp_headers/endian.o 00:03:07.531 LINK hello_bdev 00:03:07.531 LINK dif 00:03:07.531 LINK overhead 00:03:07.790 LINK spdk_bdev 00:03:07.790 CC test/nvme/err_injection/err_injection.o 00:03:07.790 CXX test/cpp_headers/env_dpdk.o 00:03:07.790 CC test/nvme/startup/startup.o 00:03:07.790 CC test/nvme/reserve/reserve.o 00:03:07.790 LINK err_injection 00:03:07.790 CC test/nvme/simple_copy/simple_copy.o 00:03:07.790 CXX test/cpp_headers/env.o 00:03:07.790 CC test/nvme/connect_stress/connect_stress.o 00:03:07.790 LINK startup 00:03:07.790 CC test/nvme/boot_partition/boot_partition.o 00:03:08.050 LINK reserve 00:03:08.050 CXX test/cpp_headers/event.o 00:03:08.050 
CC test/bdev/bdevio/bdevio.o 00:03:08.050 LINK connect_stress 00:03:08.050 LINK boot_partition 00:03:08.050 CC test/nvme/compliance/nvme_compliance.o 00:03:08.050 LINK simple_copy 00:03:08.050 CC test/nvme/fused_ordering/fused_ordering.o 00:03:08.309 CXX test/cpp_headers/fd_group.o 00:03:08.309 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:08.309 CXX test/cpp_headers/fd.o 00:03:08.309 CC test/nvme/fdp/fdp.o 00:03:08.309 CC test/nvme/cuse/cuse.o 00:03:08.309 LINK fused_ordering 00:03:08.309 CXX test/cpp_headers/file.o 00:03:08.309 LINK bdevperf 00:03:08.309 CXX test/cpp_headers/fsdev.o 00:03:08.309 LINK doorbell_aers 00:03:08.569 LINK bdevio 00:03:08.569 CXX test/cpp_headers/fsdev_module.o 00:03:08.569 LINK nvme_compliance 00:03:08.569 CXX test/cpp_headers/ftl.o 00:03:08.569 CXX test/cpp_headers/fuse_dispatcher.o 00:03:08.569 CXX test/cpp_headers/gpt_spec.o 00:03:08.569 LINK fdp 00:03:08.569 CXX test/cpp_headers/hexlify.o 00:03:08.569 CXX test/cpp_headers/histogram_data.o 00:03:08.569 CXX test/cpp_headers/idxd.o 00:03:08.569 CXX test/cpp_headers/idxd_spec.o 00:03:08.569 CXX test/cpp_headers/init.o 00:03:08.569 CXX test/cpp_headers/ioat.o 00:03:08.828 CC examples/nvmf/nvmf/nvmf.o 00:03:08.828 CXX test/cpp_headers/ioat_spec.o 00:03:08.828 CXX test/cpp_headers/iscsi_spec.o 00:03:08.828 CXX test/cpp_headers/json.o 00:03:08.828 CXX test/cpp_headers/jsonrpc.o 00:03:08.828 CXX test/cpp_headers/keyring.o 00:03:08.828 CXX test/cpp_headers/keyring_module.o 00:03:08.828 CXX test/cpp_headers/likely.o 00:03:08.828 CXX test/cpp_headers/log.o 00:03:08.828 CXX test/cpp_headers/lvol.o 00:03:08.828 CXX test/cpp_headers/md5.o 00:03:08.828 CXX test/cpp_headers/memory.o 00:03:08.828 CXX test/cpp_headers/mmio.o 00:03:09.087 CXX test/cpp_headers/nbd.o 00:03:09.087 CXX test/cpp_headers/net.o 00:03:09.087 CXX test/cpp_headers/notify.o 00:03:09.087 CXX test/cpp_headers/nvme.o 00:03:09.087 LINK nvmf 00:03:09.087 CXX test/cpp_headers/nvme_intel.o 00:03:09.087 CXX 
test/cpp_headers/nvme_ocssd.o 00:03:09.087 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:09.087 CXX test/cpp_headers/nvme_spec.o 00:03:09.087 CXX test/cpp_headers/nvme_zns.o 00:03:09.087 CXX test/cpp_headers/nvmf_cmd.o 00:03:09.087 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:09.087 CXX test/cpp_headers/nvmf.o 00:03:09.087 CXX test/cpp_headers/nvmf_spec.o 00:03:09.346 CXX test/cpp_headers/nvmf_transport.o 00:03:09.346 CXX test/cpp_headers/opal.o 00:03:09.346 CXX test/cpp_headers/opal_spec.o 00:03:09.346 CXX test/cpp_headers/pci_ids.o 00:03:09.346 CXX test/cpp_headers/pipe.o 00:03:09.346 CXX test/cpp_headers/queue.o 00:03:09.346 CXX test/cpp_headers/reduce.o 00:03:09.346 CXX test/cpp_headers/rpc.o 00:03:09.346 CXX test/cpp_headers/scheduler.o 00:03:09.346 CXX test/cpp_headers/scsi.o 00:03:09.346 CXX test/cpp_headers/scsi_spec.o 00:03:09.346 CXX test/cpp_headers/sock.o 00:03:09.346 CXX test/cpp_headers/stdinc.o 00:03:09.346 CXX test/cpp_headers/string.o 00:03:09.605 CXX test/cpp_headers/thread.o 00:03:09.605 CXX test/cpp_headers/trace.o 00:03:09.605 CXX test/cpp_headers/trace_parser.o 00:03:09.605 LINK cuse 00:03:09.605 CXX test/cpp_headers/tree.o 00:03:09.605 CXX test/cpp_headers/ublk.o 00:03:09.605 CXX test/cpp_headers/util.o 00:03:09.605 CXX test/cpp_headers/uuid.o 00:03:09.605 CXX test/cpp_headers/version.o 00:03:09.605 CXX test/cpp_headers/vfio_user_pci.o 00:03:09.605 CXX test/cpp_headers/vfio_user_spec.o 00:03:09.605 CXX test/cpp_headers/vhost.o 00:03:09.605 CXX test/cpp_headers/vmd.o 00:03:09.606 CXX test/cpp_headers/xor.o 00:03:09.606 CXX test/cpp_headers/zipf.o 00:03:12.901 LINK esnap 00:03:12.901 00:03:12.901 real 1m20.606s 00:03:12.901 user 6m52.040s 00:03:12.901 sys 1m38.620s 00:03:12.901 09:02:30 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:12.901 09:02:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:12.901 ************************************ 00:03:12.901 END TEST make 00:03:12.901 ************************************ 
00:03:12.901 09:02:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:12.901 09:02:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:12.901 09:02:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:12.901 09:02:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.901 09:02:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:12.901 09:02:30 -- pm/common@44 -- $ pid=5453 00:03:12.901 09:02:30 -- pm/common@50 -- $ kill -TERM 5453 00:03:12.901 09:02:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.901 09:02:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:12.901 09:02:30 -- pm/common@44 -- $ pid=5455 00:03:12.901 09:02:30 -- pm/common@50 -- $ kill -TERM 5455 00:03:13.161 09:02:30 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:13.161 09:02:30 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:13.161 09:02:30 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:13.161 09:02:30 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:13.161 09:02:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:13.161 09:02:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:13.161 09:02:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:13.161 09:02:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:13.161 09:02:30 -- scripts/common.sh@336 -- # read -ra ver1 00:03:13.161 09:02:30 -- scripts/common.sh@337 -- # IFS=.-: 00:03:13.161 09:02:30 -- scripts/common.sh@337 -- # read -ra ver2 00:03:13.161 09:02:30 -- scripts/common.sh@338 -- # local 'op=<' 00:03:13.161 09:02:30 -- scripts/common.sh@340 -- # ver1_l=2 00:03:13.161 09:02:30 -- scripts/common.sh@341 -- # ver2_l=1 00:03:13.161 09:02:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:13.161 09:02:30 -- scripts/common.sh@344 -- # case "$op" in 00:03:13.161 09:02:30 -- scripts/common.sh@345 -- # : 1 
00:03:13.161 09:02:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:13.161 09:02:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:13.161 09:02:30 -- scripts/common.sh@365 -- # decimal 1 00:03:13.161 09:02:30 -- scripts/common.sh@353 -- # local d=1 00:03:13.161 09:02:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:13.161 09:02:30 -- scripts/common.sh@355 -- # echo 1 00:03:13.161 09:02:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:13.161 09:02:30 -- scripts/common.sh@366 -- # decimal 2 00:03:13.161 09:02:30 -- scripts/common.sh@353 -- # local d=2 00:03:13.161 09:02:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:13.161 09:02:30 -- scripts/common.sh@355 -- # echo 2 00:03:13.161 09:02:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:13.161 09:02:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:13.162 09:02:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:13.162 09:02:30 -- scripts/common.sh@368 -- # return 0 00:03:13.162 09:02:30 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:13.162 09:02:30 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:13.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.162 --rc genhtml_branch_coverage=1 00:03:13.162 --rc genhtml_function_coverage=1 00:03:13.162 --rc genhtml_legend=1 00:03:13.162 --rc geninfo_all_blocks=1 00:03:13.162 --rc geninfo_unexecuted_blocks=1 00:03:13.162 00:03:13.162 ' 00:03:13.162 09:02:30 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:13.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.162 --rc genhtml_branch_coverage=1 00:03:13.162 --rc genhtml_function_coverage=1 00:03:13.162 --rc genhtml_legend=1 00:03:13.162 --rc geninfo_all_blocks=1 00:03:13.162 --rc geninfo_unexecuted_blocks=1 00:03:13.162 00:03:13.162 ' 00:03:13.162 09:02:30 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:03:13.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.162 --rc genhtml_branch_coverage=1 00:03:13.162 --rc genhtml_function_coverage=1 00:03:13.162 --rc genhtml_legend=1 00:03:13.162 --rc geninfo_all_blocks=1 00:03:13.162 --rc geninfo_unexecuted_blocks=1 00:03:13.162 00:03:13.162 ' 00:03:13.162 09:02:30 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:13.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.162 --rc genhtml_branch_coverage=1 00:03:13.162 --rc genhtml_function_coverage=1 00:03:13.162 --rc genhtml_legend=1 00:03:13.162 --rc geninfo_all_blocks=1 00:03:13.162 --rc geninfo_unexecuted_blocks=1 00:03:13.162 00:03:13.162 ' 00:03:13.162 09:02:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:13.162 09:02:30 -- nvmf/common.sh@7 -- # uname -s 00:03:13.162 09:02:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:13.162 09:02:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:13.162 09:02:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:13.162 09:02:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:13.162 09:02:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:13.162 09:02:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:13.162 09:02:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:13.162 09:02:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:13.162 09:02:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:13.162 09:02:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:13.162 09:02:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d8dac9db-f9af-4c2d-89de-4790b63e0fa6 00:03:13.162 09:02:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=d8dac9db-f9af-4c2d-89de-4790b63e0fa6 00:03:13.162 09:02:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:13.162 09:02:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:13.162 09:02:30 -- 
nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:13.162 09:02:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:13.162 09:02:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:13.162 09:02:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:13.162 09:02:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:13.162 09:02:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:13.162 09:02:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:13.162 09:02:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.162 09:02:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.162 09:02:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.162 09:02:31 -- paths/export.sh@5 -- # export PATH 00:03:13.162 09:02:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.162 09:02:31 -- nvmf/common.sh@51 -- # : 0 00:03:13.162 09:02:31 -- nvmf/common.sh@52 -- # 
export NVMF_APP_SHM_ID 00:03:13.162 09:02:31 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:13.162 09:02:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:13.162 09:02:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:13.162 09:02:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:13.162 09:02:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:13.162 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:13.162 09:02:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:13.162 09:02:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:13.162 09:02:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:13.162 09:02:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:13.162 09:02:31 -- spdk/autotest.sh@32 -- # uname -s 00:03:13.162 09:02:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:13.162 09:02:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:13.162 09:02:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:13.162 09:02:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:13.162 09:02:31 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:13.162 09:02:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:13.422 09:02:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:13.422 09:02:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:13.422 09:02:31 -- spdk/autotest.sh@48 -- # udevadm_pid=54378 00:03:13.422 09:02:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:13.422 09:02:31 -- pm/common@17 -- # local monitor 00:03:13.422 09:02:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.422 09:02:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.422 09:02:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 
00:03:13.422 09:02:31 -- pm/common@25 -- # sleep 1 00:03:13.422 09:02:31 -- pm/common@21 -- # date +%s 00:03:13.422 09:02:31 -- pm/common@21 -- # date +%s 00:03:13.422 09:02:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728982951 00:03:13.422 09:02:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728982951 00:03:13.422 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728982951_collect-cpu-load.pm.log 00:03:13.422 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728982951_collect-vmstat.pm.log 00:03:14.363 09:02:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:14.363 09:02:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:14.363 09:02:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:14.363 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:03:14.363 09:02:32 -- spdk/autotest.sh@59 -- # create_test_list 00:03:14.363 09:02:32 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:14.363 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:03:14.363 09:02:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:14.363 09:02:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:14.363 09:02:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:14.363 09:02:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:14.363 09:02:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:14.363 09:02:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:14.363 09:02:32 -- common/autotest_common.sh@1455 -- # uname 00:03:14.363 09:02:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 
00:03:14.363 09:02:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:14.363 09:02:32 -- common/autotest_common.sh@1475 -- # uname 00:03:14.363 09:02:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:14.363 09:02:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:14.363 09:02:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:14.363 lcov: LCOV version 1.15 00:03:14.363 09:02:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:29.300 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:29.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:47.411 09:03:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:47.411 09:03:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.411 09:03:03 -- common/autotest_common.sh@10 -- # set +x 00:03:47.411 09:03:03 -- spdk/autotest.sh@78 -- # rm -f 00:03:47.411 09:03:03 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.411 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:47.411 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:47.411 09:03:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:47.411 09:03:04 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:47.411 09:03:04 -- 
common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:47.411 09:03:04 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:47.411 09:03:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.411 09:03:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:47.411 09:03:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:47.411 09:03:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.411 09:03:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.411 09:03:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.411 09:03:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:47.411 09:03:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:47.411 09:03:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:47.411 09:03:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.411 09:03:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.411 09:03:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:03:47.411 09:03:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:03:47.411 09:03:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:47.411 09:03:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.411 09:03:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.411 09:03:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:03:47.411 09:03:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:03:47.411 09:03:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:47.411 09:03:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.411 09:03:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:47.411 09:03:04 -- spdk/autotest.sh@97 -- # for 
dev in /dev/nvme*n!(*p*)
00:03:47.411 09:03:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:47.411 09:03:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:47.411 09:03:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:47.411 09:03:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:47.411 No valid GPT data, bailing
00:03:47.411 09:03:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:47.411 09:03:04 -- scripts/common.sh@394 -- # pt=
00:03:47.411 09:03:04 -- scripts/common.sh@395 -- # return 1
00:03:47.411 09:03:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:47.411 1+0 records in
00:03:47.411 1+0 records out
00:03:47.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461165 s, 227 MB/s
00:03:47.411 09:03:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:47.411 09:03:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:47.411 09:03:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:03:47.411 09:03:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:03:47.412 09:03:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:03:47.412 No valid GPT data, bailing
00:03:47.412 09:03:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:03:47.412 09:03:04 -- scripts/common.sh@394 -- # pt=
00:03:47.412 09:03:04 -- scripts/common.sh@395 -- # return 1
00:03:47.412 09:03:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:03:47.412 1+0 records in
00:03:47.412 1+0 records out
00:03:47.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00723204 s, 145 MB/s
00:03:47.412 09:03:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:47.412 09:03:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:47.412 09:03:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:03:47.412 09:03:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:03:47.412 09:03:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:03:47.412 No valid GPT data, bailing
00:03:47.412 09:03:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:03:47.412 09:03:04 -- scripts/common.sh@394 -- # pt=
00:03:47.412 09:03:04 -- scripts/common.sh@395 -- # return 1
00:03:47.412 09:03:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:03:47.412 1+0 records in
00:03:47.412 1+0 records out
00:03:47.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689209 s, 152 MB/s
00:03:47.412 09:03:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:47.412 09:03:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:47.412 09:03:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:03:47.412 09:03:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:03:47.412 09:03:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:03:47.412 No valid GPT data, bailing
00:03:47.412 09:03:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:03:47.412 09:03:04 -- scripts/common.sh@394 -- # pt=
00:03:47.412 09:03:04 -- scripts/common.sh@395 -- # return 1
00:03:47.412 09:03:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:03:47.412 1+0 records in
00:03:47.412 1+0 records out
00:03:47.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067346 s, 156 MB/s
00:03:47.412 09:03:04 -- spdk/autotest.sh@105 -- # sync
00:03:47.412 09:03:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:47.412 09:03:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:47.412 09:03:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:49.953 09:03:07 -- spdk/autotest.sh@111 -- # uname -s
00:03:49.953 09:03:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:49.953 09:03:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:49.953 09:03:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:50.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:50.524 Hugepages
00:03:50.524 node hugesize free / total
00:03:50.524 node0 1048576kB 0 / 0
00:03:50.524 node0 2048kB 0 / 0
00:03:50.524
00:03:50.524 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:50.524 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:50.784 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:50.784 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:03:50.784 09:03:08 -- spdk/autotest.sh@117 -- # uname -s
00:03:50.784 09:03:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:50.784 09:03:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:50.784 09:03:08 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:51.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:51.723 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:03:51.723 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:03:51.723 09:03:09 -- common/autotest_common.sh@1515 -- # sleep 1
00:03:53.105 09:03:10 -- common/autotest_common.sh@1516 -- # bdfs=()
00:03:53.105 09:03:10 -- common/autotest_common.sh@1516 -- # local bdfs
00:03:53.106 09:03:10 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:03:53.106 09:03:10 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:03:53.106 09:03:10 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:53.106 09:03:10 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:53.106 09:03:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:53.106 09:03:10 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:03:53.106 09:03:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:53.106 09:03:10 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:03:53.106 09:03:10 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:03:53.106 09:03:10 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:53.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:53.365 Waiting for block devices as requested
00:03:53.623 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:03:53.623 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:03:53.623 09:03:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:03:53.623 09:03:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:03:53.623 09:03:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:03:53.623 09:03:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:03:53.623 09:03:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:03:53.623 09:03:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:03:53.623 09:03:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:03:53.883 09:03:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1
00:03:53.883 09:03:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1
00:03:53.883 09:03:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]]
00:03:53.883 09:03:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1
00:03:53.883 09:03:11 -- common/autotest_common.sh@1529 -- # grep oacs
00:03:53.883 09:03:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:03:53.883 09:03:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:03:53.883 09:03:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:03:53.883 09:03:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:03:53.883 09:03:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1
00:03:53.883 09:03:11 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:03:53.883 09:03:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:03:53.883 09:03:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:03:53.883 09:03:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:03:53.883 09:03:11 -- common/autotest_common.sh@1541 -- # continue
00:03:53.883 09:03:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:03:53.883 09:03:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:03:53.883 09:03:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:03:53.883 09:03:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme
00:03:53.883 09:03:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:03:53.883 09:03:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:03:53.883 09:03:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:03:53.883 09:03:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:03:53.883 09:03:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:03:53.883 09:03:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:03:53.883 09:03:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:03:53.883 09:03:11 -- common/autotest_common.sh@1529 -- # grep oacs
00:03:53.883 09:03:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:03:53.883 09:03:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:03:53.883 09:03:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:03:53.883 09:03:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:03:53.883 09:03:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:03:53.883 09:03:11 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:03:53.883 09:03:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:03:53.883 09:03:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:03:53.883 09:03:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:03:53.883 09:03:11 -- common/autotest_common.sh@1541 -- # continue
00:03:53.883 09:03:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:53.883 09:03:11 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:53.883 09:03:11 -- common/autotest_common.sh@10 -- # set +x
00:03:53.883 09:03:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:53.883 09:03:11 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:53.883 09:03:11 -- common/autotest_common.sh@10 -- # set +x
00:03:53.883 09:03:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:54.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:54.819 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:03:54.819 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:03:54.819 09:03:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:54.819 09:03:12 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:54.819 09:03:12 -- common/autotest_common.sh@10 -- # set +x
00:03:55.078 09:03:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:55.078 09:03:12 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:03:55.078 09:03:12 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:03:55.078 09:03:12 -- common/autotest_common.sh@1561 -- # bdfs=()
00:03:55.078 09:03:12 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:03:55.078 09:03:12 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:03:55.078 09:03:12 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:03:55.078 09:03:12 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:03:55.078 09:03:12 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:55.078 09:03:12 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:55.078 09:03:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:55.078 09:03:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:03:55.078 09:03:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:55.078 09:03:12 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:03:55.078 09:03:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:03:55.078 09:03:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:03:55.078 09:03:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:03:55.078 09:03:12 -- common/autotest_common.sh@1564 -- # device=0x0010
00:03:55.078 09:03:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:03:55.078 09:03:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:03:55.078 09:03:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:03:55.078 09:03:12 -- common/autotest_common.sh@1564 -- # device=0x0010
00:03:55.078 09:03:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:03:55.078 09:03:12 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:03:55.078 09:03:12 -- common/autotest_common.sh@1570 -- # return 0
00:03:55.078 09:03:12 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:03:55.078 09:03:12 -- common/autotest_common.sh@1578 -- # return 0
00:03:55.078 09:03:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:55.078 09:03:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:55.078 09:03:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:55.078 09:03:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:55.078 09:03:12 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:55.078 09:03:12 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:55.078 09:03:12 -- common/autotest_common.sh@10 -- # set +x
00:03:55.078 09:03:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:55.078 09:03:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:03:55.079 09:03:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:55.079 09:03:12 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:55.079 09:03:12 -- common/autotest_common.sh@10 -- # set +x
00:03:55.079 ************************************
00:03:55.079 START TEST env
00:03:55.079 ************************************
00:03:55.079 09:03:12 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:03:55.337 * Looking for test storage...
00:03:55.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1691 -- # lcov --version
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:55.337 09:03:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:55.337 09:03:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:55.337 09:03:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:55.337 09:03:13 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:55.337 09:03:13 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:55.337 09:03:13 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:55.337 09:03:13 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:55.337 09:03:13 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:55.337 09:03:13 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:55.337 09:03:13 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:55.337 09:03:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:55.337 09:03:13 env -- scripts/common.sh@344 -- # case "$op" in
00:03:55.337 09:03:13 env -- scripts/common.sh@345 -- # : 1
00:03:55.337 09:03:13 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:55.337 09:03:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:55.337 09:03:13 env -- scripts/common.sh@365 -- # decimal 1
00:03:55.337 09:03:13 env -- scripts/common.sh@353 -- # local d=1
00:03:55.337 09:03:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:55.337 09:03:13 env -- scripts/common.sh@355 -- # echo 1
00:03:55.337 09:03:13 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:55.337 09:03:13 env -- scripts/common.sh@366 -- # decimal 2
00:03:55.337 09:03:13 env -- scripts/common.sh@353 -- # local d=2
00:03:55.337 09:03:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:55.337 09:03:13 env -- scripts/common.sh@355 -- # echo 2
00:03:55.337 09:03:13 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:55.337 09:03:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:55.337 09:03:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:55.337 09:03:13 env -- scripts/common.sh@368 -- # return 0
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:55.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:55.337 --rc genhtml_branch_coverage=1
00:03:55.337 --rc genhtml_function_coverage=1
00:03:55.337 --rc genhtml_legend=1
00:03:55.337 --rc geninfo_all_blocks=1
00:03:55.337 --rc geninfo_unexecuted_blocks=1
00:03:55.337
00:03:55.337 '
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:55.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:55.337 --rc genhtml_branch_coverage=1
00:03:55.337 --rc genhtml_function_coverage=1
00:03:55.337 --rc genhtml_legend=1
00:03:55.337 --rc geninfo_all_blocks=1
00:03:55.337 --rc geninfo_unexecuted_blocks=1
00:03:55.337
00:03:55.337 '
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:55.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:55.337 --rc genhtml_branch_coverage=1
00:03:55.337 --rc genhtml_function_coverage=1
00:03:55.337 --rc genhtml_legend=1
00:03:55.337 --rc geninfo_all_blocks=1
00:03:55.337 --rc geninfo_unexecuted_blocks=1
00:03:55.337
00:03:55.337 '
00:03:55.337 09:03:13 env -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:55.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:55.337 --rc genhtml_branch_coverage=1
00:03:55.337 --rc genhtml_function_coverage=1
00:03:55.337 --rc genhtml_legend=1
00:03:55.337 --rc geninfo_all_blocks=1
00:03:55.338 --rc geninfo_unexecuted_blocks=1
00:03:55.338
00:03:55.338 '
00:03:55.338 09:03:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:03:55.338 09:03:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:55.338 09:03:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:55.338 09:03:13 env -- common/autotest_common.sh@10 -- # set +x
00:03:55.338 ************************************
00:03:55.338 START TEST env_memory
00:03:55.338 ************************************
00:03:55.338 09:03:13 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:03:55.338
00:03:55.338
00:03:55.338 CUnit - A unit testing framework for C - Version 2.1-3
00:03:55.338 http://cunit.sourceforge.net/
00:03:55.338
00:03:55.338
00:03:55.338 Suite: memory
00:03:55.338 Test: alloc and free memory map ...[2024-10-15 09:03:13.216668] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:55.597 passed
00:03:55.597 Test: mem map translation ...[2024-10-15 09:03:13.260914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:55.597 [2024-10-15 09:03:13.260962] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:55.597 [2024-10-15 09:03:13.261024] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:55.597 [2024-10-15 09:03:13.261059] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:55.597 passed
00:03:55.597 Test: mem map registration ...[2024-10-15 09:03:13.328409] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:55.597 [2024-10-15 09:03:13.328451] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:55.597 passed
00:03:55.597 Test: mem map adjacent registrations ...passed
00:03:55.597
00:03:55.597 Run Summary: Type Total Ran Passed Failed Inactive
00:03:55.597 suites 1 1 n/a 0 0
00:03:55.597 tests 4 4 4 0 0
00:03:55.597 asserts 152 152 152 0 n/a
00:03:55.597
00:03:55.597 Elapsed time = 0.243 seconds
00:03:55.597
00:03:55.597 real 0m0.297s
00:03:55.597 user 0m0.258s
00:03:55.597 sys 0m0.027s
00:03:55.597 09:03:13 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:55.597 09:03:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:55.597 ************************************
00:03:55.597 END TEST env_memory
00:03:55.597 ************************************
00:03:55.856 09:03:13 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:03:55.856 09:03:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:55.856 09:03:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:55.856 09:03:13 env -- common/autotest_common.sh@10 -- # set +x
00:03:55.856 ************************************
00:03:55.856 START TEST env_vtophys ************************************
00:03:55.856 09:03:13 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:03:55.856 EAL: lib.eal log level changed from notice to debug
00:03:55.856 EAL: Detected lcore 0 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 1 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 2 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 3 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 4 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 5 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 6 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 7 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 8 as core 0 on socket 0
00:03:55.856 EAL: Detected lcore 9 as core 0 on socket 0
00:03:55.856 EAL: Maximum logical cores by configuration: 128
00:03:55.856 EAL: Detected CPU lcores: 10
00:03:55.856 EAL: Detected NUMA nodes: 1
00:03:55.856 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:55.856 EAL: Detected shared linkage of DPDK
00:03:55.856 EAL: No shared files mode enabled, IPC will be disabled
00:03:55.856 EAL: Selected IOVA mode 'PA'
00:03:55.856 EAL: Probing VFIO support...
00:03:55.856 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:03:55.856 EAL: VFIO modules not loaded, skipping VFIO support...
00:03:55.856 EAL: Ask a virtual area of 0x2e000 bytes
00:03:55.856 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:55.856 EAL: Setting up physically contiguous memory...
00:03:55.856 EAL: Setting maximum number of open files to 524288
00:03:55.856 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:55.856 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:55.856 EAL: Ask a virtual area of 0x61000 bytes
00:03:55.856 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:55.856 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:55.856 EAL: Ask a virtual area of 0x400000000 bytes
00:03:55.856 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:55.856 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:55.856 EAL: Ask a virtual area of 0x61000 bytes
00:03:55.856 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:55.856 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:55.856 EAL: Ask a virtual area of 0x400000000 bytes
00:03:55.856 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:55.856 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:55.856 EAL: Ask a virtual area of 0x61000 bytes
00:03:55.856 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:55.856 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:55.856 EAL: Ask a virtual area of 0x400000000 bytes
00:03:55.856 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:55.856 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:55.856 EAL: Ask a virtual area of 0x61000 bytes
00:03:55.856 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:55.856 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:55.856 EAL: Ask a virtual area of 0x400000000 bytes
00:03:55.856 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:55.856 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:55.856 EAL: Hugepages will be freed exactly as allocated.
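The memseg sizing in the EAL lines above is internally consistent: each of the 4 segment lists is created for n_segs:8192 hugepages of hugepage_sz:2097152 bytes (2 MiB), which works out to exactly the 0x400000000-byte (16 GiB) virtual area the EAL asks for per list. A quick arithmetic cross-check (plain shell, not part of the log itself):

```shell
# Per-memseg-list VA window = number of segments * hugepage size,
# using the values printed by the EAL trace above.
n_segs=8192
hugepage_sz=2097152                       # 2 MiB hugepages
va_window=$(( n_segs * hugepage_sz ))     # bytes of VA needed per list
printf 'va_window=0x%x\n' "$va_window"    # prints va_window=0x400000000
```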
00:03:55.856 EAL: No shared files mode enabled, IPC is disabled
00:03:55.856 EAL: No shared files mode enabled, IPC is disabled
00:03:55.856 EAL: TSC frequency is ~2290000 KHz
00:03:55.856 EAL: Main lcore 0 is ready (tid=7f236393ca40;cpuset=[0])
00:03:55.856 EAL: Trying to obtain current memory policy.
00:03:55.856 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:55.856 EAL: Restoring previous memory policy: 0
00:03:55.856 EAL: request: mp_malloc_sync
00:03:55.856 EAL: No shared files mode enabled, IPC is disabled
00:03:55.856 EAL: Heap on socket 0 was expanded by 2MB
00:03:55.856 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:03:55.856 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:55.856 EAL: Mem event callback 'spdk:(nil)' registered
00:03:55.856 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:03:55.856
00:03:55.856
00:03:55.856 CUnit - A unit testing framework for C - Version 2.1-3
00:03:55.856 http://cunit.sourceforge.net/
00:03:55.856
00:03:55.856
00:03:55.856 Suite: components_suite
00:03:56.423 Test: vtophys_malloc_test ...passed
00:03:56.424 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:56.424 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.424 EAL: Restoring previous memory policy: 4
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was expanded by 4MB
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was shrunk by 4MB
00:03:56.424 EAL: Trying to obtain current memory policy.
00:03:56.424 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.424 EAL: Restoring previous memory policy: 4
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was expanded by 6MB
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was shrunk by 6MB
00:03:56.424 EAL: Trying to obtain current memory policy.
00:03:56.424 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.424 EAL: Restoring previous memory policy: 4
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was expanded by 10MB
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was shrunk by 10MB
00:03:56.424 EAL: Trying to obtain current memory policy.
00:03:56.424 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.424 EAL: Restoring previous memory policy: 4
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was expanded by 18MB
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was shrunk by 18MB
00:03:56.424 EAL: Trying to obtain current memory policy.
00:03:56.424 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.424 EAL: Restoring previous memory policy: 4
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was expanded by 34MB
00:03:56.424 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.424 EAL: request: mp_malloc_sync
00:03:56.424 EAL: No shared files mode enabled, IPC is disabled
00:03:56.424 EAL: Heap on socket 0 was shrunk by 34MB
00:03:56.683 EAL: Trying to obtain current memory policy.
00:03:56.683 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.683 EAL: Restoring previous memory policy: 4
00:03:56.683 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.683 EAL: request: mp_malloc_sync
00:03:56.683 EAL: No shared files mode enabled, IPC is disabled
00:03:56.683 EAL: Heap on socket 0 was expanded by 66MB
00:03:56.683 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.683 EAL: request: mp_malloc_sync
00:03:56.683 EAL: No shared files mode enabled, IPC is disabled
00:03:56.683 EAL: Heap on socket 0 was shrunk by 66MB
00:03:56.683 EAL: Trying to obtain current memory policy.
00:03:56.683 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:56.942 EAL: Restoring previous memory policy: 4
00:03:56.942 EAL: Calling mem event callback 'spdk:(nil)'
00:03:56.942 EAL: request: mp_malloc_sync
00:03:56.942 EAL: No shared files mode enabled, IPC is disabled
00:03:56.942 EAL: Heap on socket 0 was expanded by 130MB
00:03:56.942 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.201 EAL: request: mp_malloc_sync
00:03:57.201 EAL: No shared files mode enabled, IPC is disabled
00:03:57.201 EAL: Heap on socket 0 was shrunk by 130MB
00:03:57.201 EAL: Trying to obtain current memory policy.
00:03:57.201 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:57.464 EAL: Restoring previous memory policy: 4
00:03:57.464 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.464 EAL: request: mp_malloc_sync
00:03:57.464 EAL: No shared files mode enabled, IPC is disabled
00:03:57.464 EAL: Heap on socket 0 was expanded by 258MB
00:03:57.728 EAL: Calling mem event callback 'spdk:(nil)'
00:03:57.728 EAL: request: mp_malloc_sync
00:03:57.728 EAL: No shared files mode enabled, IPC is disabled
00:03:57.728 EAL: Heap on socket 0 was shrunk by 258MB
00:03:58.296 EAL: Trying to obtain current memory policy.
00:03:58.296 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.296 EAL: Restoring previous memory policy: 4
00:03:58.296 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.296 EAL: request: mp_malloc_sync
00:03:58.296 EAL: No shared files mode enabled, IPC is disabled
00:03:58.296 EAL: Heap on socket 0 was expanded by 514MB
00:03:59.263 EAL: Calling mem event callback 'spdk:(nil)'
00:03:59.263 EAL: request: mp_malloc_sync
00:03:59.263 EAL: No shared files mode enabled, IPC is disabled
00:03:59.263 EAL: Heap on socket 0 was shrunk by 514MB
00:04:00.197 EAL: Trying to obtain current memory policy.
00:04:00.197 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:00.454 EAL: Restoring previous memory policy: 4
00:04:00.454 EAL: Calling mem event callback 'spdk:(nil)'
00:04:00.454 EAL: request: mp_malloc_sync
00:04:00.454 EAL: No shared files mode enabled, IPC is disabled
00:04:00.454 EAL: Heap on socket 0 was expanded by 1026MB
00:04:02.356 EAL: Calling mem event callback 'spdk:(nil)'
00:04:02.356 EAL: request: mp_malloc_sync
00:04:02.356 EAL: No shared files mode enabled, IPC is disabled
00:04:02.356 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:04.258 passed
00:04:04.258
00:04:04.258 Run Summary: Type Total Ran Passed Failed Inactive
00:04:04.258 suites 1 1 n/a 0 0
00:04:04.258 tests 2 2 2 0 0
00:04:04.258 asserts 5782 5782 5782 0 n/a
00:04:04.258
00:04:04.258 Elapsed time = 8.141 seconds
00:04:04.258 EAL: Calling mem event callback 'spdk:(nil)'
00:04:04.258 EAL: request: mp_malloc_sync
00:04:04.258 EAL: No shared files mode enabled, IPC is disabled
00:04:04.258 EAL: Heap on socket 0 was shrunk by 2MB
00:04:04.258 EAL: No shared files mode enabled, IPC is disabled
00:04:04.258 EAL: No shared files mode enabled, IPC is disabled
00:04:04.258 EAL: No shared files mode enabled, IPC is disabled
00:04:04.258
00:04:04.258 real 0m8.462s
00:04:04.258 user 0m7.466s
00:04:04.258 sys 0m0.841s
00:04:04.258 09:03:21 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:04.258 09:03:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:04.258 ************************************
00:04:04.258 END TEST env_vtophys ************************************
00:04:04.259 09:03:22 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:04.259 09:03:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:04.259 09:03:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:04.259 09:03:22 env -- common/autotest_common.sh@10 -- # set +x
00:04:04.259 ************************************
00:04:04.259 START TEST env_pci ************************************
00:04:04.259 09:03:22 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:04.259
00:04:04.259
00:04:04.259 CUnit - A unit testing framework for C - Version 2.1-3
00:04:04.259 http://cunit.sourceforge.net/
00:04:04.259
00:04:04.259
00:04:04.259 Suite: pci
00:04:04.259 Test: pci_hook ...[2024-10-15 09:03:22.084634] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56712 has claimed it
00:04:04.259 passed
00:04:04.259
00:04:04.259 Run Summary: Type Total Ran Passed Failed Inactive
00:04:04.259 suites 1 1 n/a 0 0
00:04:04.259 tests 1 1 1 0 0
00:04:04.259 asserts 25 25 25 0 n/a
00:04:04.259
00:04:04.259 Elapsed time = 0.006 seconds
00:04:04.259 EAL: Cannot find device (10000:00:01.0)
00:04:04.259 EAL: Failed to attach device on primary process
00:04:04.259
00:04:04.259 real 0m0.108s
00:04:04.259 user 0m0.050s
00:04:04.259 sys 0m0.058s
00:04:04.259 09:03:22 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:04.259 09:03:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:04.259 ************************************
00:04:04.259 END TEST env_pci ************************************
00:04:04.519 09:03:22 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:04.519 09:03:22 env -- env/env.sh@15 -- # uname
00:04:04.519 09:03:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:04.519 09:03:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:04.519 09:03:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:04.519 09:03:22 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:04:04.519 09:03:22 env
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.519 09:03:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.519 ************************************ 00:04:04.519 START TEST env_dpdk_post_init 00:04:04.519 ************************************ 00:04:04.519 09:03:22 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.519 EAL: Detected CPU lcores: 10 00:04:04.519 EAL: Detected NUMA nodes: 1 00:04:04.519 EAL: Detected shared linkage of DPDK 00:04:04.519 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.519 EAL: Selected IOVA mode 'PA' 00:04:04.519 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.778 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:04.778 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:04.778 Starting DPDK initialization... 00:04:04.778 Starting SPDK post initialization... 00:04:04.778 SPDK NVMe probe 00:04:04.779 Attaching to 0000:00:10.0 00:04:04.779 Attaching to 0000:00:11.0 00:04:04.779 Attached to 0000:00:10.0 00:04:04.779 Attached to 0000:00:11.0 00:04:04.779 Cleaning up... 
00:04:04.779 00:04:04.779 real 0m0.273s 00:04:04.779 user 0m0.078s 00:04:04.779 sys 0m0.095s 00:04:04.779 09:03:22 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.779 09:03:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.779 ************************************ 00:04:04.779 END TEST env_dpdk_post_init 00:04:04.779 ************************************ 00:04:04.779 09:03:22 env -- env/env.sh@26 -- # uname 00:04:04.779 09:03:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:04.779 09:03:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.779 09:03:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.779 09:03:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.779 09:03:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.779 ************************************ 00:04:04.779 START TEST env_mem_callbacks 00:04:04.779 ************************************ 00:04:04.779 09:03:22 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.779 EAL: Detected CPU lcores: 10 00:04:04.779 EAL: Detected NUMA nodes: 1 00:04:04.779 EAL: Detected shared linkage of DPDK 00:04:04.779 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.779 EAL: Selected IOVA mode 'PA' 00:04:05.038 00:04:05.038 00:04:05.038 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.038 http://cunit.sourceforge.net/ 00:04:05.038 00:04:05.038 00:04:05.038 Suite: memory
TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.038 00:04:05.038 Test: test ... 
00:04:05.038 register 0x200000200000 2097152 00:04:05.038 malloc 3145728 00:04:05.038 register 0x200000400000 4194304 00:04:05.038 buf 0x2000004fffc0 len 3145728 PASSED 00:04:05.038 malloc 64 00:04:05.038 buf 0x2000004ffec0 len 64 PASSED 00:04:05.038 malloc 4194304 00:04:05.038 register 0x200000800000 6291456 00:04:05.038 buf 0x2000009fffc0 len 4194304 PASSED 00:04:05.038 free 0x2000004fffc0 3145728 00:04:05.038 free 0x2000004ffec0 64 00:04:05.038 unregister 0x200000400000 4194304 PASSED 00:04:05.038 free 0x2000009fffc0 4194304 00:04:05.038 unregister 0x200000800000 6291456 PASSED 00:04:05.038 malloc 8388608 00:04:05.038 register 0x200000400000 10485760 00:04:05.038 buf 0x2000005fffc0 len 8388608 PASSED 00:04:05.038 free 0x2000005fffc0 8388608 00:04:05.038 unregister 0x200000400000 10485760 PASSED 00:04:05.038 passed 00:04:05.038 00:04:05.038 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.038 suites 1 1 n/a 0 0 00:04:05.038 tests 1 1 1 0 0 00:04:05.038 asserts 15 15 15 0 n/a 00:04:05.038 00:04:05.038 Elapsed time = 0.083 seconds 00:04:05.038 00:04:05.038 real 0m0.280s 00:04:05.038 user 0m0.108s 00:04:05.038 sys 0m0.070s 00:04:05.038 09:03:22 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.038 09:03:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:05.038 ************************************ 00:04:05.038 END TEST env_mem_callbacks 00:04:05.038 ************************************ 00:04:05.038 ************************************ 00:04:05.038 END TEST env 00:04:05.038 ************************************ 00:04:05.038 00:04:05.038 real 0m9.998s 00:04:05.038 user 0m8.192s 00:04:05.038 sys 0m1.450s 00:04:05.038 09:03:22 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.038 09:03:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.299 09:03:22 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.299 09:03:22 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.299 09:03:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.299 09:03:22 -- common/autotest_common.sh@10 -- # set +x 00:04:05.299 ************************************ 00:04:05.299 START TEST rpc 00:04:05.299 ************************************ 00:04:05.299 09:03:22 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.299 * Looking for test storage... 00:04:05.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.299 09:03:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.299 09:03:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.299 09:03:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.299 09:03:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.299 09:03:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.299 09:03:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.299 09:03:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.299 09:03:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.299 09:03:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.299 09:03:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.299 09:03:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.299 09:03:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.299 09:03:23 rpc -- scripts/common.sh@345 -- # : 1 00:04:05.299 09:03:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.299 09:03:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.299 09:03:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.299 09:03:23 rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.299 09:03:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.299 09:03:23 rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.299 09:03:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.299 09:03:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.299 09:03:23 rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.299 09:03:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.299 09:03:23 rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.299 09:03:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.299 09:03:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.299 09:03:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.299 09:03:23 rpc -- scripts/common.sh@368 -- # return 0 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.299 --rc genhtml_branch_coverage=1 00:04:05.299 --rc genhtml_function_coverage=1 00:04:05.299 --rc genhtml_legend=1 00:04:05.299 --rc geninfo_all_blocks=1 00:04:05.299 --rc geninfo_unexecuted_blocks=1 00:04:05.299 00:04:05.299 ' 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.299 --rc genhtml_branch_coverage=1 00:04:05.299 --rc genhtml_function_coverage=1 00:04:05.299 --rc genhtml_legend=1 00:04:05.299 --rc geninfo_all_blocks=1 00:04:05.299 --rc geninfo_unexecuted_blocks=1 00:04:05.299 00:04:05.299 ' 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:05.299 --rc genhtml_branch_coverage=1 00:04:05.299 --rc genhtml_function_coverage=1 00:04:05.299 --rc genhtml_legend=1 00:04:05.299 --rc geninfo_all_blocks=1 00:04:05.299 --rc geninfo_unexecuted_blocks=1 00:04:05.299 00:04:05.299 ' 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.299 --rc genhtml_branch_coverage=1 00:04:05.299 --rc genhtml_function_coverage=1 00:04:05.299 --rc genhtml_legend=1 00:04:05.299 --rc geninfo_all_blocks=1 00:04:05.299 --rc geninfo_unexecuted_blocks=1 00:04:05.299 00:04:05.299 ' 00:04:05.299 09:03:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56839 00:04:05.299 09:03:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:05.299 09:03:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.299 09:03:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56839 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@831 -- # '[' -z 56839 ']' 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.299 09:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.558 [2024-10-15 09:03:23.294628] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:04:05.558 [2024-10-15 09:03:23.294772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56839 ] 00:04:05.817 [2024-10-15 09:03:23.461029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.817 [2024-10-15 09:03:23.574747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:05.817 [2024-10-15 09:03:23.574819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56839' to capture a snapshot of events at runtime. 00:04:05.817 [2024-10-15 09:03:23.574830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:05.817 [2024-10-15 09:03:23.574840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:05.817 [2024-10-15 09:03:23.574848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56839 for offline analysis/debug. 
00:04:05.817 [2024-10-15 09:03:23.576131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.753 09:03:24 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.753 09:03:24 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:06.753 09:03:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.753 09:03:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.753 09:03:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.753 09:03:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.753 09:03:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.753 09:03:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.754 09:03:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.754 ************************************ 00:04:06.754 START TEST rpc_integrity 00:04:06.754 ************************************ 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.754 09:03:24 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.754 { 00:04:06.754 "name": "Malloc0", 00:04:06.754 "aliases": [ 00:04:06.754 "47e74bf2-2392-4d62-bd8d-428d1f470900" 00:04:06.754 ], 00:04:06.754 "product_name": "Malloc disk", 00:04:06.754 "block_size": 512, 00:04:06.754 "num_blocks": 16384, 00:04:06.754 "uuid": "47e74bf2-2392-4d62-bd8d-428d1f470900", 00:04:06.754 "assigned_rate_limits": { 00:04:06.754 "rw_ios_per_sec": 0, 00:04:06.754 "rw_mbytes_per_sec": 0, 00:04:06.754 "r_mbytes_per_sec": 0, 00:04:06.754 "w_mbytes_per_sec": 0 00:04:06.754 }, 00:04:06.754 "claimed": false, 00:04:06.754 "zoned": false, 00:04:06.754 "supported_io_types": { 00:04:06.754 "read": true, 00:04:06.754 "write": true, 00:04:06.754 "unmap": true, 00:04:06.754 "flush": true, 00:04:06.754 "reset": true, 00:04:06.754 "nvme_admin": false, 00:04:06.754 "nvme_io": false, 00:04:06.754 "nvme_io_md": false, 00:04:06.754 "write_zeroes": true, 00:04:06.754 "zcopy": true, 00:04:06.754 "get_zone_info": false, 00:04:06.754 "zone_management": false, 00:04:06.754 "zone_append": false, 00:04:06.754 "compare": false, 00:04:06.754 "compare_and_write": false, 00:04:06.754 "abort": true, 00:04:06.754 "seek_hole": false, 
00:04:06.754 "seek_data": false, 00:04:06.754 "copy": true, 00:04:06.754 "nvme_iov_md": false 00:04:06.754 }, 00:04:06.754 "memory_domains": [ 00:04:06.754 { 00:04:06.754 "dma_device_id": "system", 00:04:06.754 "dma_device_type": 1 00:04:06.754 }, 00:04:06.754 { 00:04:06.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.754 "dma_device_type": 2 00:04:06.754 } 00:04:06.754 ], 00:04:06.754 "driver_specific": {} 00:04:06.754 } 00:04:06.754 ]' 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.754 [2024-10-15 09:03:24.625426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.754 [2024-10-15 09:03:24.625490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.754 [2024-10-15 09:03:24.625514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:06.754 [2024-10-15 09:03:24.625528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.754 [2024-10-15 09:03:24.627834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.754 [2024-10-15 09:03:24.627891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.754 Passthru0 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.754 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.754 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:07.013 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.013 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.013 { 00:04:07.013 "name": "Malloc0", 00:04:07.013 "aliases": [ 00:04:07.013 "47e74bf2-2392-4d62-bd8d-428d1f470900" 00:04:07.013 ], 00:04:07.013 "product_name": "Malloc disk", 00:04:07.013 "block_size": 512, 00:04:07.013 "num_blocks": 16384, 00:04:07.013 "uuid": "47e74bf2-2392-4d62-bd8d-428d1f470900", 00:04:07.013 "assigned_rate_limits": { 00:04:07.013 "rw_ios_per_sec": 0, 00:04:07.013 "rw_mbytes_per_sec": 0, 00:04:07.013 "r_mbytes_per_sec": 0, 00:04:07.013 "w_mbytes_per_sec": 0 00:04:07.013 }, 00:04:07.013 "claimed": true, 00:04:07.013 "claim_type": "exclusive_write", 00:04:07.013 "zoned": false, 00:04:07.013 "supported_io_types": { 00:04:07.013 "read": true, 00:04:07.013 "write": true, 00:04:07.013 "unmap": true, 00:04:07.013 "flush": true, 00:04:07.013 "reset": true, 00:04:07.013 "nvme_admin": false, 00:04:07.013 "nvme_io": false, 00:04:07.013 "nvme_io_md": false, 00:04:07.013 "write_zeroes": true, 00:04:07.013 "zcopy": true, 00:04:07.013 "get_zone_info": false, 00:04:07.013 "zone_management": false, 00:04:07.013 "zone_append": false, 00:04:07.013 "compare": false, 00:04:07.013 "compare_and_write": false, 00:04:07.013 "abort": true, 00:04:07.013 "seek_hole": false, 00:04:07.013 "seek_data": false, 00:04:07.013 "copy": true, 00:04:07.013 "nvme_iov_md": false 00:04:07.013 }, 00:04:07.013 "memory_domains": [ 00:04:07.013 { 00:04:07.013 "dma_device_id": "system", 00:04:07.013 "dma_device_type": 1 00:04:07.013 }, 00:04:07.013 { 00:04:07.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.013 "dma_device_type": 2 00:04:07.013 } 00:04:07.013 ], 00:04:07.013 "driver_specific": {} 00:04:07.013 }, 00:04:07.013 { 00:04:07.013 "name": "Passthru0", 00:04:07.013 "aliases": [ 00:04:07.013 "d000490d-7a48-573b-9630-66d4c81e5837" 00:04:07.013 ], 00:04:07.013 "product_name": "passthru", 00:04:07.013 
"block_size": 512, 00:04:07.013 "num_blocks": 16384, 00:04:07.013 "uuid": "d000490d-7a48-573b-9630-66d4c81e5837", 00:04:07.013 "assigned_rate_limits": { 00:04:07.013 "rw_ios_per_sec": 0, 00:04:07.013 "rw_mbytes_per_sec": 0, 00:04:07.013 "r_mbytes_per_sec": 0, 00:04:07.013 "w_mbytes_per_sec": 0 00:04:07.013 }, 00:04:07.013 "claimed": false, 00:04:07.013 "zoned": false, 00:04:07.013 "supported_io_types": { 00:04:07.013 "read": true, 00:04:07.013 "write": true, 00:04:07.013 "unmap": true, 00:04:07.013 "flush": true, 00:04:07.013 "reset": true, 00:04:07.013 "nvme_admin": false, 00:04:07.013 "nvme_io": false, 00:04:07.013 "nvme_io_md": false, 00:04:07.013 "write_zeroes": true, 00:04:07.013 "zcopy": true, 00:04:07.013 "get_zone_info": false, 00:04:07.013 "zone_management": false, 00:04:07.013 "zone_append": false, 00:04:07.013 "compare": false, 00:04:07.013 "compare_and_write": false, 00:04:07.013 "abort": true, 00:04:07.013 "seek_hole": false, 00:04:07.013 "seek_data": false, 00:04:07.013 "copy": true, 00:04:07.013 "nvme_iov_md": false 00:04:07.013 }, 00:04:07.013 "memory_domains": [ 00:04:07.013 { 00:04:07.013 "dma_device_id": "system", 00:04:07.013 "dma_device_type": 1 00:04:07.013 }, 00:04:07.013 { 00:04:07.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.013 "dma_device_type": 2 00:04:07.013 } 00:04:07.013 ], 00:04:07.013 "driver_specific": { 00:04:07.013 "passthru": { 00:04:07.013 "name": "Passthru0", 00:04:07.013 "base_bdev_name": "Malloc0" 00:04:07.013 } 00:04:07.013 } 00:04:07.013 } 00:04:07.013 ]' 00:04:07.013 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.013 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.013 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.013 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.013 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.013 09:03:24 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.013 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.013 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.013 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.013 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.013 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.014 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.014 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.014 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.014 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.014 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.014 09:03:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.014 00:04:07.014 real 0m0.368s 00:04:07.014 user 0m0.202s 00:04:07.014 sys 0m0.058s 00:04:07.014 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.014 09:03:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.014 ************************************ 00:04:07.014 END TEST rpc_integrity 00:04:07.014 ************************************ 00:04:07.014 09:03:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.014 09:03:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.014 09:03:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.014 09:03:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.014 ************************************ 00:04:07.014 START TEST rpc_plugins 00:04:07.014 ************************************ 00:04:07.014 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:07.014 09:03:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.014 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.014 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.014 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.014 09:03:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.272 09:03:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.272 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.272 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.272 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.272 09:03:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.272 { 00:04:07.272 "name": "Malloc1", 00:04:07.272 "aliases": [ 00:04:07.272 "5f15bc85-a495-4bff-ab6b-625530e8e9ae" 00:04:07.272 ], 00:04:07.272 "product_name": "Malloc disk", 00:04:07.272 "block_size": 4096, 00:04:07.272 "num_blocks": 256, 00:04:07.272 "uuid": "5f15bc85-a495-4bff-ab6b-625530e8e9ae", 00:04:07.272 "assigned_rate_limits": { 00:04:07.272 "rw_ios_per_sec": 0, 00:04:07.272 "rw_mbytes_per_sec": 0, 00:04:07.272 "r_mbytes_per_sec": 0, 00:04:07.272 "w_mbytes_per_sec": 0 00:04:07.272 }, 00:04:07.272 "claimed": false, 00:04:07.272 "zoned": false, 00:04:07.272 "supported_io_types": { 00:04:07.272 "read": true, 00:04:07.272 "write": true, 00:04:07.272 "unmap": true, 00:04:07.272 "flush": true, 00:04:07.273 "reset": true, 00:04:07.273 "nvme_admin": false, 00:04:07.273 "nvme_io": false, 00:04:07.273 "nvme_io_md": false, 00:04:07.273 "write_zeroes": true, 00:04:07.273 "zcopy": true, 00:04:07.273 "get_zone_info": false, 00:04:07.273 "zone_management": false, 00:04:07.273 "zone_append": false, 00:04:07.273 "compare": false, 00:04:07.273 "compare_and_write": false, 00:04:07.273 "abort": true, 00:04:07.273 "seek_hole": false, 00:04:07.273 "seek_data": false, 00:04:07.273 "copy": 
true, 00:04:07.273 "nvme_iov_md": false 00:04:07.273 }, 00:04:07.273 "memory_domains": [ 00:04:07.273 { 00:04:07.273 "dma_device_id": "system", 00:04:07.273 "dma_device_type": 1 00:04:07.273 }, 00:04:07.273 { 00:04:07.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.273 "dma_device_type": 2 00:04:07.273 } 00:04:07.273 ], 00:04:07.273 "driver_specific": {} 00:04:07.273 } 00:04:07.273 ]' 00:04:07.273 09:03:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:07.273 09:03:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.273 09:03:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.273 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.273 09:03:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.273 09:03:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.273 09:03:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.273 09:03:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.273 09:03:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.273 09:03:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.273 09:03:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.273 09:03:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:07.273 09:03:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.273 00:04:07.273 real 0m0.176s 00:04:07.273 user 0m0.097s 00:04:07.273 sys 0m0.032s 00:04:07.273 09:03:25 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.273 09:03:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.273 ************************************ 00:04:07.273 END TEST rpc_plugins 00:04:07.273 ************************************ 00:04:07.273 09:03:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.273 09:03:25 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.273 09:03:25 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.273 09:03:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.273 ************************************ 00:04:07.273 START TEST rpc_trace_cmd_test 00:04:07.273 ************************************ 00:04:07.273 09:03:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:07.273 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:07.273 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.273 09:03:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.273 09:03:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.273 09:03:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.273 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:07.273 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56839", 00:04:07.273 "tpoint_group_mask": "0x8", 00:04:07.273 "iscsi_conn": { 00:04:07.273 "mask": "0x2", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "scsi": { 00:04:07.273 "mask": "0x4", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "bdev": { 00:04:07.273 "mask": "0x8", 00:04:07.273 "tpoint_mask": "0xffffffffffffffff" 00:04:07.273 }, 00:04:07.273 "nvmf_rdma": { 00:04:07.273 "mask": "0x10", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "nvmf_tcp": { 00:04:07.273 "mask": "0x20", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "ftl": { 00:04:07.273 "mask": "0x40", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "blobfs": { 00:04:07.273 "mask": "0x80", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "dsa": { 00:04:07.273 "mask": "0x200", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "thread": { 00:04:07.273 "mask": "0x400", 00:04:07.273 
"tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "nvme_pcie": { 00:04:07.273 "mask": "0x800", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "iaa": { 00:04:07.273 "mask": "0x1000", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "nvme_tcp": { 00:04:07.273 "mask": "0x2000", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "bdev_nvme": { 00:04:07.273 "mask": "0x4000", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "sock": { 00:04:07.273 "mask": "0x8000", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "blob": { 00:04:07.273 "mask": "0x10000", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "bdev_raid": { 00:04:07.273 "mask": "0x20000", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 }, 00:04:07.273 "scheduler": { 00:04:07.273 "mask": "0x40000", 00:04:07.273 "tpoint_mask": "0x0" 00:04:07.273 } 00:04:07.273 }' 00:04:07.273 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:07.531 00:04:07.531 real 0m0.231s 00:04:07.531 user 0m0.184s 00:04:07.531 sys 0m0.036s 00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:07.531 09:03:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.531 ************************************ 00:04:07.531 END TEST rpc_trace_cmd_test 00:04:07.531 ************************************ 00:04:07.531 09:03:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.531 09:03:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.531 09:03:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.531 09:03:25 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.531 09:03:25 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.531 09:03:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.531 ************************************ 00:04:07.531 START TEST rpc_daemon_integrity 00:04:07.531 ************************************ 00:04:07.531 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:07.531 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.531 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.531 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.790 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.790 { 00:04:07.790 "name": "Malloc2", 00:04:07.790 "aliases": [ 00:04:07.790 "c21fcf85-4d75-4563-aed5-8037dfce87e0" 00:04:07.790 ], 00:04:07.790 "product_name": "Malloc disk", 00:04:07.790 "block_size": 512, 00:04:07.790 "num_blocks": 16384, 00:04:07.790 "uuid": "c21fcf85-4d75-4563-aed5-8037dfce87e0", 00:04:07.790 "assigned_rate_limits": { 00:04:07.790 "rw_ios_per_sec": 0, 00:04:07.790 "rw_mbytes_per_sec": 0, 00:04:07.790 "r_mbytes_per_sec": 0, 00:04:07.790 "w_mbytes_per_sec": 0 00:04:07.790 }, 00:04:07.790 "claimed": false, 00:04:07.790 "zoned": false, 00:04:07.790 "supported_io_types": { 00:04:07.790 "read": true, 00:04:07.790 "write": true, 00:04:07.790 "unmap": true, 00:04:07.790 "flush": true, 00:04:07.790 "reset": true, 00:04:07.791 "nvme_admin": false, 00:04:07.791 "nvme_io": false, 00:04:07.791 "nvme_io_md": false, 00:04:07.791 "write_zeroes": true, 00:04:07.791 "zcopy": true, 00:04:07.791 "get_zone_info": false, 00:04:07.791 "zone_management": false, 00:04:07.791 "zone_append": false, 00:04:07.791 "compare": false, 00:04:07.791 "compare_and_write": false, 00:04:07.791 "abort": true, 00:04:07.791 "seek_hole": false, 00:04:07.791 "seek_data": false, 00:04:07.791 "copy": true, 00:04:07.791 "nvme_iov_md": false 00:04:07.791 }, 00:04:07.791 "memory_domains": [ 00:04:07.791 { 00:04:07.791 "dma_device_id": "system", 00:04:07.791 "dma_device_type": 1 00:04:07.791 }, 00:04:07.791 { 00:04:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.791 "dma_device_type": 2 00:04:07.791 } 
00:04:07.791 ], 00:04:07.791 "driver_specific": {} 00:04:07.791 } 00:04:07.791 ]' 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.791 [2024-10-15 09:03:25.572343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:07.791 [2024-10-15 09:03:25.572414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.791 [2024-10-15 09:03:25.572436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:07.791 [2024-10-15 09:03:25.572447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.791 [2024-10-15 09:03:25.574789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.791 [2024-10-15 09:03:25.574830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.791 Passthru0 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.791 { 00:04:07.791 "name": "Malloc2", 00:04:07.791 "aliases": [ 00:04:07.791 "c21fcf85-4d75-4563-aed5-8037dfce87e0" 
00:04:07.791 ], 00:04:07.791 "product_name": "Malloc disk", 00:04:07.791 "block_size": 512, 00:04:07.791 "num_blocks": 16384, 00:04:07.791 "uuid": "c21fcf85-4d75-4563-aed5-8037dfce87e0", 00:04:07.791 "assigned_rate_limits": { 00:04:07.791 "rw_ios_per_sec": 0, 00:04:07.791 "rw_mbytes_per_sec": 0, 00:04:07.791 "r_mbytes_per_sec": 0, 00:04:07.791 "w_mbytes_per_sec": 0 00:04:07.791 }, 00:04:07.791 "claimed": true, 00:04:07.791 "claim_type": "exclusive_write", 00:04:07.791 "zoned": false, 00:04:07.791 "supported_io_types": { 00:04:07.791 "read": true, 00:04:07.791 "write": true, 00:04:07.791 "unmap": true, 00:04:07.791 "flush": true, 00:04:07.791 "reset": true, 00:04:07.791 "nvme_admin": false, 00:04:07.791 "nvme_io": false, 00:04:07.791 "nvme_io_md": false, 00:04:07.791 "write_zeroes": true, 00:04:07.791 "zcopy": true, 00:04:07.791 "get_zone_info": false, 00:04:07.791 "zone_management": false, 00:04:07.791 "zone_append": false, 00:04:07.791 "compare": false, 00:04:07.791 "compare_and_write": false, 00:04:07.791 "abort": true, 00:04:07.791 "seek_hole": false, 00:04:07.791 "seek_data": false, 00:04:07.791 "copy": true, 00:04:07.791 "nvme_iov_md": false 00:04:07.791 }, 00:04:07.791 "memory_domains": [ 00:04:07.791 { 00:04:07.791 "dma_device_id": "system", 00:04:07.791 "dma_device_type": 1 00:04:07.791 }, 00:04:07.791 { 00:04:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.791 "dma_device_type": 2 00:04:07.791 } 00:04:07.791 ], 00:04:07.791 "driver_specific": {} 00:04:07.791 }, 00:04:07.791 { 00:04:07.791 "name": "Passthru0", 00:04:07.791 "aliases": [ 00:04:07.791 "0616d17b-12e6-55a0-a103-c67a1b2767ba" 00:04:07.791 ], 00:04:07.791 "product_name": "passthru", 00:04:07.791 "block_size": 512, 00:04:07.791 "num_blocks": 16384, 00:04:07.791 "uuid": "0616d17b-12e6-55a0-a103-c67a1b2767ba", 00:04:07.791 "assigned_rate_limits": { 00:04:07.791 "rw_ios_per_sec": 0, 00:04:07.791 "rw_mbytes_per_sec": 0, 00:04:07.791 "r_mbytes_per_sec": 0, 00:04:07.791 "w_mbytes_per_sec": 0 
00:04:07.791 }, 00:04:07.791 "claimed": false, 00:04:07.791 "zoned": false, 00:04:07.791 "supported_io_types": { 00:04:07.791 "read": true, 00:04:07.791 "write": true, 00:04:07.791 "unmap": true, 00:04:07.791 "flush": true, 00:04:07.791 "reset": true, 00:04:07.791 "nvme_admin": false, 00:04:07.791 "nvme_io": false, 00:04:07.791 "nvme_io_md": false, 00:04:07.791 "write_zeroes": true, 00:04:07.791 "zcopy": true, 00:04:07.791 "get_zone_info": false, 00:04:07.791 "zone_management": false, 00:04:07.791 "zone_append": false, 00:04:07.791 "compare": false, 00:04:07.791 "compare_and_write": false, 00:04:07.791 "abort": true, 00:04:07.791 "seek_hole": false, 00:04:07.791 "seek_data": false, 00:04:07.791 "copy": true, 00:04:07.791 "nvme_iov_md": false 00:04:07.791 }, 00:04:07.791 "memory_domains": [ 00:04:07.791 { 00:04:07.791 "dma_device_id": "system", 00:04:07.791 "dma_device_type": 1 00:04:07.791 }, 00:04:07.791 { 00:04:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.791 "dma_device_type": 2 00:04:07.791 } 00:04:07.791 ], 00:04:07.791 "driver_specific": { 00:04:07.791 "passthru": { 00:04:07.791 "name": "Passthru0", 00:04:07.791 "base_bdev_name": "Malloc2" 00:04:07.791 } 00:04:07.791 } 00:04:07.791 } 00:04:07.791 ]' 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:07.791 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.050 00:04:08.050 real 0m0.331s 00:04:08.050 user 0m0.170s 00:04:08.050 sys 0m0.061s 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.050 09:03:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.050 ************************************ 00:04:08.050 END TEST rpc_daemon_integrity 00:04:08.050 ************************************ 00:04:08.050 09:03:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:08.050 09:03:25 rpc -- rpc/rpc.sh@84 -- # killprocess 56839 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@950 -- # '[' -z 56839 ']' 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@954 -- # kill -0 56839 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@955 -- # uname 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56839 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:08.050 
09:03:25 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56839' 00:04:08.050 killing process with pid 56839 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@969 -- # kill 56839 00:04:08.050 09:03:25 rpc -- common/autotest_common.sh@974 -- # wait 56839 00:04:10.583 00:04:10.583 real 0m5.303s 00:04:10.583 user 0m5.798s 00:04:10.583 sys 0m0.949s 00:04:10.583 09:03:28 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.583 09:03:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.583 ************************************ 00:04:10.583 END TEST rpc 00:04:10.583 ************************************ 00:04:10.583 09:03:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.583 09:03:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.583 09:03:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.583 09:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:10.583 ************************************ 00:04:10.583 START TEST skip_rpc 00:04:10.583 ************************************ 00:04:10.583 09:03:28 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.583 * Looking for test storage... 
00:04:10.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.583 09:03:28 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:10.583 09:03:28 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:10.583 09:03:28 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.852 09:03:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.852 --rc genhtml_branch_coverage=1 00:04:10.852 --rc genhtml_function_coverage=1 00:04:10.852 --rc genhtml_legend=1 00:04:10.852 --rc geninfo_all_blocks=1 00:04:10.852 --rc geninfo_unexecuted_blocks=1 00:04:10.852 00:04:10.852 ' 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.852 --rc genhtml_branch_coverage=1 00:04:10.852 --rc genhtml_function_coverage=1 00:04:10.852 --rc genhtml_legend=1 00:04:10.852 --rc geninfo_all_blocks=1 00:04:10.852 --rc geninfo_unexecuted_blocks=1 00:04:10.852 00:04:10.852 ' 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.852 --rc genhtml_branch_coverage=1 00:04:10.852 --rc genhtml_function_coverage=1 00:04:10.852 --rc genhtml_legend=1 00:04:10.852 --rc geninfo_all_blocks=1 00:04:10.852 --rc geninfo_unexecuted_blocks=1 00:04:10.852 00:04:10.852 ' 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:10.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.852 --rc genhtml_branch_coverage=1 00:04:10.852 --rc genhtml_function_coverage=1 00:04:10.852 --rc genhtml_legend=1 00:04:10.852 --rc geninfo_all_blocks=1 00:04:10.852 --rc geninfo_unexecuted_blocks=1 00:04:10.852 00:04:10.852 ' 00:04:10.852 09:03:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:10.852 09:03:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:10.852 09:03:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.852 09:03:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.852 ************************************ 00:04:10.852 START TEST skip_rpc 00:04:10.852 ************************************ 00:04:10.852 09:03:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:10.852 09:03:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57068 00:04:10.852 09:03:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.852 09:03:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.852 09:03:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:10.852 [2024-10-15 09:03:28.694085] Starting SPDK v25.01-pre 
git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:04:10.852 [2024-10-15 09:03:28.694231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57068 ] 00:04:11.131 [2024-10-15 09:03:28.870426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.131 [2024-10-15 09:03:28.988262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57068 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57068 ']' 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57068 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57068 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57068' 00:04:16.400 killing process with pid 57068 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57068 00:04:16.400 09:03:33 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57068 00:04:18.301 00:04:18.301 real 0m7.550s 00:04:18.301 user 0m7.063s 00:04:18.301 sys 0m0.403s 00:04:18.301 09:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.301 09:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.301 ************************************ 00:04:18.301 END TEST skip_rpc 00:04:18.301 ************************************ 00:04:18.301 09:03:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.301 09:03:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.301 09:03:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.301 09:03:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.301 
************************************ 00:04:18.301 START TEST skip_rpc_with_json 00:04:18.301 ************************************ 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57183 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57183 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57183 ']' 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:18.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:18.301 09:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.560 [2024-10-15 09:03:36.286594] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:04:18.560 [2024-10-15 09:03:36.286726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57183 ] 00:04:18.560 [2024-10-15 09:03:36.451038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.819 [2024-10-15 09:03:36.569456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.754 [2024-10-15 09:03:37.437971] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:19.754 request: 00:04:19.754 { 00:04:19.754 "trtype": "tcp", 00:04:19.754 "method": "nvmf_get_transports", 00:04:19.754 "req_id": 1 00:04:19.754 } 00:04:19.754 Got JSON-RPC error response 00:04:19.754 response: 00:04:19.754 { 00:04:19.754 "code": -19, 00:04:19.754 "message": "No such device" 00:04:19.754 } 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.754 [2024-10-15 09:03:37.450057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.754 09:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.754 { 00:04:19.754 "subsystems": [ 00:04:19.754 { 00:04:19.754 "subsystem": "fsdev", 00:04:19.754 "config": [ 00:04:19.754 { 00:04:19.754 "method": "fsdev_set_opts", 00:04:19.754 "params": { 00:04:19.754 "fsdev_io_pool_size": 65535, 00:04:19.754 "fsdev_io_cache_size": 256 00:04:19.754 } 00:04:19.754 } 00:04:19.754 ] 00:04:19.754 }, 00:04:19.754 { 00:04:19.754 "subsystem": "keyring", 00:04:19.754 "config": [] 00:04:19.754 }, 00:04:19.754 { 00:04:19.754 "subsystem": "iobuf", 00:04:19.754 "config": [ 00:04:19.754 { 00:04:19.754 "method": "iobuf_set_options", 00:04:19.754 "params": { 00:04:19.754 "small_pool_count": 8192, 00:04:19.754 "large_pool_count": 1024, 00:04:19.754 "small_bufsize": 8192, 00:04:19.754 "large_bufsize": 135168 00:04:19.754 } 00:04:19.754 } 00:04:19.754 ] 00:04:19.754 }, 00:04:19.754 { 00:04:19.754 "subsystem": "sock", 00:04:19.754 "config": [ 00:04:19.754 { 00:04:19.754 "method": "sock_set_default_impl", 00:04:19.754 "params": { 00:04:19.754 "impl_name": "posix" 00:04:19.754 } 00:04:19.754 }, 00:04:19.754 { 00:04:19.754 "method": "sock_impl_set_options", 00:04:19.754 "params": { 00:04:19.754 "impl_name": "ssl", 00:04:19.754 "recv_buf_size": 4096, 00:04:19.754 "send_buf_size": 4096, 00:04:19.754 "enable_recv_pipe": true, 00:04:19.754 "enable_quickack": false, 00:04:19.754 "enable_placement_id": 0, 00:04:19.754 
"enable_zerocopy_send_server": true, 00:04:19.754 "enable_zerocopy_send_client": false, 00:04:19.754 "zerocopy_threshold": 0, 00:04:19.754 "tls_version": 0, 00:04:19.754 "enable_ktls": false 00:04:19.754 } 00:04:19.754 }, 00:04:19.754 { 00:04:19.754 "method": "sock_impl_set_options", 00:04:19.754 "params": { 00:04:19.754 "impl_name": "posix", 00:04:19.754 "recv_buf_size": 2097152, 00:04:19.754 "send_buf_size": 2097152, 00:04:19.754 "enable_recv_pipe": true, 00:04:19.754 "enable_quickack": false, 00:04:19.754 "enable_placement_id": 0, 00:04:19.754 "enable_zerocopy_send_server": true, 00:04:19.754 "enable_zerocopy_send_client": false, 00:04:19.754 "zerocopy_threshold": 0, 00:04:19.754 "tls_version": 0, 00:04:19.754 "enable_ktls": false 00:04:19.755 } 00:04:19.755 } 00:04:19.755 ] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "vmd", 00:04:19.755 "config": [] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "accel", 00:04:19.755 "config": [ 00:04:19.755 { 00:04:19.755 "method": "accel_set_options", 00:04:19.755 "params": { 00:04:19.755 "small_cache_size": 128, 00:04:19.755 "large_cache_size": 16, 00:04:19.755 "task_count": 2048, 00:04:19.755 "sequence_count": 2048, 00:04:19.755 "buf_count": 2048 00:04:19.755 } 00:04:19.755 } 00:04:19.755 ] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "bdev", 00:04:19.755 "config": [ 00:04:19.755 { 00:04:19.755 "method": "bdev_set_options", 00:04:19.755 "params": { 00:04:19.755 "bdev_io_pool_size": 65535, 00:04:19.755 "bdev_io_cache_size": 256, 00:04:19.755 "bdev_auto_examine": true, 00:04:19.755 "iobuf_small_cache_size": 128, 00:04:19.755 "iobuf_large_cache_size": 16 00:04:19.755 } 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "method": "bdev_raid_set_options", 00:04:19.755 "params": { 00:04:19.755 "process_window_size_kb": 1024, 00:04:19.755 "process_max_bandwidth_mb_sec": 0 00:04:19.755 } 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "method": "bdev_iscsi_set_options", 00:04:19.755 "params": { 00:04:19.755 
"timeout_sec": 30 00:04:19.755 } 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "method": "bdev_nvme_set_options", 00:04:19.755 "params": { 00:04:19.755 "action_on_timeout": "none", 00:04:19.755 "timeout_us": 0, 00:04:19.755 "timeout_admin_us": 0, 00:04:19.755 "keep_alive_timeout_ms": 10000, 00:04:19.755 "arbitration_burst": 0, 00:04:19.755 "low_priority_weight": 0, 00:04:19.755 "medium_priority_weight": 0, 00:04:19.755 "high_priority_weight": 0, 00:04:19.755 "nvme_adminq_poll_period_us": 10000, 00:04:19.755 "nvme_ioq_poll_period_us": 0, 00:04:19.755 "io_queue_requests": 0, 00:04:19.755 "delay_cmd_submit": true, 00:04:19.755 "transport_retry_count": 4, 00:04:19.755 "bdev_retry_count": 3, 00:04:19.755 "transport_ack_timeout": 0, 00:04:19.755 "ctrlr_loss_timeout_sec": 0, 00:04:19.755 "reconnect_delay_sec": 0, 00:04:19.755 "fast_io_fail_timeout_sec": 0, 00:04:19.755 "disable_auto_failback": false, 00:04:19.755 "generate_uuids": false, 00:04:19.755 "transport_tos": 0, 00:04:19.755 "nvme_error_stat": false, 00:04:19.755 "rdma_srq_size": 0, 00:04:19.755 "io_path_stat": false, 00:04:19.755 "allow_accel_sequence": false, 00:04:19.755 "rdma_max_cq_size": 0, 00:04:19.755 "rdma_cm_event_timeout_ms": 0, 00:04:19.755 "dhchap_digests": [ 00:04:19.755 "sha256", 00:04:19.755 "sha384", 00:04:19.755 "sha512" 00:04:19.755 ], 00:04:19.755 "dhchap_dhgroups": [ 00:04:19.755 "null", 00:04:19.755 "ffdhe2048", 00:04:19.755 "ffdhe3072", 00:04:19.755 "ffdhe4096", 00:04:19.755 "ffdhe6144", 00:04:19.755 "ffdhe8192" 00:04:19.755 ] 00:04:19.755 } 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "method": "bdev_nvme_set_hotplug", 00:04:19.755 "params": { 00:04:19.755 "period_us": 100000, 00:04:19.755 "enable": false 00:04:19.755 } 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "method": "bdev_wait_for_examine" 00:04:19.755 } 00:04:19.755 ] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "scsi", 00:04:19.755 "config": null 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "scheduler", 
00:04:19.755 "config": [ 00:04:19.755 { 00:04:19.755 "method": "framework_set_scheduler", 00:04:19.755 "params": { 00:04:19.755 "name": "static" 00:04:19.755 } 00:04:19.755 } 00:04:19.755 ] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "vhost_scsi", 00:04:19.755 "config": [] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "vhost_blk", 00:04:19.755 "config": [] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "ublk", 00:04:19.755 "config": [] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "nbd", 00:04:19.755 "config": [] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "nvmf", 00:04:19.755 "config": [ 00:04:19.755 { 00:04:19.755 "method": "nvmf_set_config", 00:04:19.755 "params": { 00:04:19.755 "discovery_filter": "match_any", 00:04:19.755 "admin_cmd_passthru": { 00:04:19.755 "identify_ctrlr": false 00:04:19.755 }, 00:04:19.755 "dhchap_digests": [ 00:04:19.755 "sha256", 00:04:19.755 "sha384", 00:04:19.755 "sha512" 00:04:19.755 ], 00:04:19.755 "dhchap_dhgroups": [ 00:04:19.755 "null", 00:04:19.755 "ffdhe2048", 00:04:19.755 "ffdhe3072", 00:04:19.755 "ffdhe4096", 00:04:19.755 "ffdhe6144", 00:04:19.755 "ffdhe8192" 00:04:19.755 ] 00:04:19.755 } 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "method": "nvmf_set_max_subsystems", 00:04:19.755 "params": { 00:04:19.755 "max_subsystems": 1024 00:04:19.755 } 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "method": "nvmf_set_crdt", 00:04:19.755 "params": { 00:04:19.755 "crdt1": 0, 00:04:19.755 "crdt2": 0, 00:04:19.755 "crdt3": 0 00:04:19.755 } 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "method": "nvmf_create_transport", 00:04:19.755 "params": { 00:04:19.755 "trtype": "TCP", 00:04:19.755 "max_queue_depth": 128, 00:04:19.755 "max_io_qpairs_per_ctrlr": 127, 00:04:19.755 "in_capsule_data_size": 4096, 00:04:19.755 "max_io_size": 131072, 00:04:19.755 "io_unit_size": 131072, 00:04:19.755 "max_aq_depth": 128, 00:04:19.755 "num_shared_buffers": 511, 00:04:19.755 "buf_cache_size": 4294967295, 
00:04:19.755 "dif_insert_or_strip": false, 00:04:19.755 "zcopy": false, 00:04:19.755 "c2h_success": true, 00:04:19.755 "sock_priority": 0, 00:04:19.755 "abort_timeout_sec": 1, 00:04:19.755 "ack_timeout": 0, 00:04:19.755 "data_wr_pool_size": 0 00:04:19.755 } 00:04:19.755 } 00:04:19.755 ] 00:04:19.755 }, 00:04:19.755 { 00:04:19.755 "subsystem": "iscsi", 00:04:19.755 "config": [ 00:04:19.755 { 00:04:19.755 "method": "iscsi_set_options", 00:04:19.755 "params": { 00:04:19.755 "node_base": "iqn.2016-06.io.spdk", 00:04:19.755 "max_sessions": 128, 00:04:19.755 "max_connections_per_session": 2, 00:04:19.755 "max_queue_depth": 64, 00:04:19.755 "default_time2wait": 2, 00:04:19.755 "default_time2retain": 20, 00:04:19.755 "first_burst_length": 8192, 00:04:19.755 "immediate_data": true, 00:04:19.755 "allow_duplicated_isid": false, 00:04:19.755 "error_recovery_level": 0, 00:04:19.755 "nop_timeout": 60, 00:04:19.755 "nop_in_interval": 30, 00:04:19.755 "disable_chap": false, 00:04:19.755 "require_chap": false, 00:04:19.755 "mutual_chap": false, 00:04:19.755 "chap_group": 0, 00:04:19.755 "max_large_datain_per_connection": 64, 00:04:19.755 "max_r2t_per_connection": 4, 00:04:19.755 "pdu_pool_size": 36864, 00:04:19.755 "immediate_data_pool_size": 16384, 00:04:19.755 "data_out_pool_size": 2048 00:04:19.755 } 00:04:19.755 } 00:04:19.755 ] 00:04:19.755 } 00:04:19.755 ] 00:04:19.755 } 00:04:19.755 09:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:19.755 09:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57183 00:04:19.755 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57183 ']' 00:04:19.755 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57183 00:04:19.755 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:19.755 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:04:19.755 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57183 00:04:20.012 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:20.012 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:20.012 killing process with pid 57183 00:04:20.012 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57183' 00:04:20.012 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57183 00:04:20.012 09:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57183 00:04:22.563 09:03:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57228 00:04:22.563 09:03:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.563 09:03:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57228 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57228 ']' 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57228 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57228 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.842 killing process with pid 57228 
00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57228' 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57228 00:04:27.842 09:03:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57228 00:04:29.749 09:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.749 09:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.749 00:04:29.749 real 0m11.427s 00:04:29.749 user 0m10.878s 00:04:29.749 sys 0m0.862s 00:04:29.749 09:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.749 ************************************ 00:04:29.749 END TEST skip_rpc_with_json 00:04:29.749 ************************************ 00:04:29.749 09:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.008 09:03:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:30.009 09:03:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.009 09:03:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.009 09:03:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.009 ************************************ 00:04:30.009 START TEST skip_rpc_with_delay 00:04:30.009 ************************************ 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:30.009 [2024-10-15 09:03:47.775070] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:30.009 00:04:30.009 real 0m0.166s 00:04:30.009 user 0m0.094s 00:04:30.009 sys 0m0.069s 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.009 09:03:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:30.009 ************************************ 00:04:30.009 END TEST skip_rpc_with_delay 00:04:30.009 ************************************ 00:04:30.009 09:03:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:30.009 09:03:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:30.009 09:03:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:30.009 09:03:47 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.009 09:03:47 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.009 09:03:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.267 ************************************ 00:04:30.267 START TEST exit_on_failed_rpc_init 00:04:30.267 ************************************ 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57367 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57367 00:04:30.267 09:03:47 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57367 ']' 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.267 09:03:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.267 [2024-10-15 09:03:48.011884] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:04:30.267 [2024-10-15 09:03:48.012005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57367 ] 00:04:30.526 [2024-10-15 09:03:48.175764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.526 [2024-10-15 09:03:48.289956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.465 09:03:49 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.465 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:31.466 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.466 [2024-10-15 09:03:49.281928] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:04:31.466 [2024-10-15 09:03:49.282082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57385 ] 00:04:31.725 [2024-10-15 09:03:49.443851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.725 [2024-10-15 09:03:49.560996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.725 [2024-10-15 09:03:49.561085] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:31.725 [2024-10-15 09:03:49.561099] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:31.725 [2024-10-15 09:03:49.561110] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57367 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57367 ']' 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57367 00:04:31.985 09:03:49 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57367 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57367' 00:04:31.985 killing process with pid 57367 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57367 00:04:31.985 09:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57367 00:04:34.529 00:04:34.529 real 0m4.424s 00:04:34.529 user 0m4.782s 00:04:34.529 sys 0m0.615s 00:04:34.529 09:03:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.529 ************************************ 00:04:34.529 09:03:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.529 END TEST exit_on_failed_rpc_init 00:04:34.529 ************************************ 00:04:34.529 09:03:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.529 00:04:34.529 real 0m24.064s 00:04:34.529 user 0m23.050s 00:04:34.529 sys 0m2.236s 00:04:34.529 09:03:52 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.529 09:03:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.529 ************************************ 00:04:34.529 END TEST skip_rpc 00:04:34.529 ************************************ 00:04:34.789 09:03:52 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.789 09:03:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.789 09:03:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.789 09:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:34.790 ************************************ 00:04:34.790 START TEST rpc_client 00:04:34.790 ************************************ 00:04:34.790 09:03:52 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:34.790 * Looking for test storage... 00:04:34.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:34.790 09:03:52 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.790 09:03:52 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.790 09:03:52 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.790 09:03:52 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:34.790 09:03:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.050 09:03:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:35.050 09:03:52 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.050 09:03:52 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.050 --rc genhtml_branch_coverage=1 00:04:35.050 --rc genhtml_function_coverage=1 00:04:35.050 --rc genhtml_legend=1 00:04:35.050 --rc geninfo_all_blocks=1 00:04:35.050 --rc geninfo_unexecuted_blocks=1 00:04:35.050 00:04:35.050 ' 00:04:35.050 09:03:52 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.050 --rc genhtml_branch_coverage=1 00:04:35.050 --rc genhtml_function_coverage=1 00:04:35.050 --rc 
genhtml_legend=1 00:04:35.050 --rc geninfo_all_blocks=1 00:04:35.050 --rc geninfo_unexecuted_blocks=1 00:04:35.050 00:04:35.050 ' 00:04:35.050 09:03:52 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.050 --rc genhtml_branch_coverage=1 00:04:35.050 --rc genhtml_function_coverage=1 00:04:35.050 --rc genhtml_legend=1 00:04:35.050 --rc geninfo_all_blocks=1 00:04:35.050 --rc geninfo_unexecuted_blocks=1 00:04:35.050 00:04:35.050 ' 00:04:35.050 09:03:52 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.050 --rc genhtml_branch_coverage=1 00:04:35.050 --rc genhtml_function_coverage=1 00:04:35.050 --rc genhtml_legend=1 00:04:35.050 --rc geninfo_all_blocks=1 00:04:35.050 --rc geninfo_unexecuted_blocks=1 00:04:35.050 00:04:35.050 ' 00:04:35.050 09:03:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:35.050 OK 00:04:35.050 09:03:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:35.050 ************************************ 00:04:35.050 END TEST rpc_client 00:04:35.050 ************************************ 00:04:35.050 00:04:35.050 real 0m0.302s 00:04:35.050 user 0m0.162s 00:04:35.050 sys 0m0.159s 00:04:35.050 09:03:52 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.050 09:03:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:35.050 09:03:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.050 09:03:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.050 09:03:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.051 09:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:35.051 ************************************ 00:04:35.051 START TEST json_config 
00:04:35.051 ************************************ 00:04:35.051 09:03:52 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.051 09:03:52 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.051 09:03:52 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:35.051 09:03:52 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:35.311 09:03:53 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:35.311 09:03:53 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.311 09:03:53 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.311 09:03:53 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.311 09:03:53 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.311 09:03:53 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.311 09:03:53 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.311 09:03:53 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.311 09:03:53 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.311 09:03:53 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.311 09:03:53 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.311 09:03:53 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.311 09:03:53 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:35.311 09:03:53 json_config -- scripts/common.sh@345 -- # : 1 00:04:35.312 09:03:53 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.312 09:03:53 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.312 09:03:53 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:35.312 09:03:53 json_config -- scripts/common.sh@353 -- # local d=1 00:04:35.312 09:03:53 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.312 09:03:53 json_config -- scripts/common.sh@355 -- # echo 1 00:04:35.312 09:03:53 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.312 09:03:53 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:35.312 09:03:53 json_config -- scripts/common.sh@353 -- # local d=2 00:04:35.312 09:03:53 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.312 09:03:53 json_config -- scripts/common.sh@355 -- # echo 2 00:04:35.312 09:03:53 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.312 09:03:53 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.312 09:03:53 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.312 09:03:53 json_config -- scripts/common.sh@368 -- # return 0 00:04:35.312 09:03:53 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.312 09:03:53 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.312 --rc genhtml_branch_coverage=1 00:04:35.312 --rc genhtml_function_coverage=1 00:04:35.312 --rc genhtml_legend=1 00:04:35.312 --rc geninfo_all_blocks=1 00:04:35.312 --rc geninfo_unexecuted_blocks=1 00:04:35.312 00:04:35.312 ' 00:04:35.312 09:03:53 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.312 --rc genhtml_branch_coverage=1 00:04:35.312 --rc genhtml_function_coverage=1 00:04:35.312 --rc genhtml_legend=1 00:04:35.312 --rc geninfo_all_blocks=1 00:04:35.312 --rc geninfo_unexecuted_blocks=1 00:04:35.312 00:04:35.312 ' 00:04:35.312 09:03:53 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.312 --rc genhtml_branch_coverage=1 00:04:35.312 --rc genhtml_function_coverage=1 00:04:35.312 --rc genhtml_legend=1 00:04:35.312 --rc geninfo_all_blocks=1 00:04:35.312 --rc geninfo_unexecuted_blocks=1 00:04:35.312 00:04:35.312 ' 00:04:35.312 09:03:53 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.312 --rc genhtml_branch_coverage=1 00:04:35.312 --rc genhtml_function_coverage=1 00:04:35.312 --rc genhtml_legend=1 00:04:35.312 --rc geninfo_all_blocks=1 00:04:35.312 --rc geninfo_unexecuted_blocks=1 00:04:35.312 00:04:35.312 ' 00:04:35.312 09:03:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d8dac9db-f9af-4c2d-89de-4790b63e0fa6 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=d8dac9db-f9af-4c2d-89de-4790b63e0fa6 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.312 09:03:53 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.312 09:03:53 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.312 09:03:53 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.312 09:03:53 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.312 09:03:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.312 09:03:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.312 09:03:53 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.312 09:03:53 json_config -- paths/export.sh@5 -- # export PATH 00:04:35.312 09:03:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@51 -- # : 0 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.312 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.312 09:03:53 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.312 09:03:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:35.312 09:03:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:35.312 09:03:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:35.312 09:03:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:35.312 09:03:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:35.312 WARNING: No tests are enabled so not running JSON configuration tests 00:04:35.312 09:03:53 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:35.312 09:03:53 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:35.312 00:04:35.312 real 0m0.235s 00:04:35.312 user 0m0.131s 00:04:35.312 sys 0m0.113s 00:04:35.312 09:03:53 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.312 09:03:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.312 ************************************ 00:04:35.312 END TEST json_config 00:04:35.312 ************************************ 00:04:35.312 09:03:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:35.312 09:03:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.312 09:03:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.312 09:03:53 -- common/autotest_common.sh@10 -- # set +x 00:04:35.312 ************************************ 00:04:35.312 START TEST json_config_extra_key 00:04:35.312 ************************************ 00:04:35.312 09:03:53 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:35.572 09:03:53 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.572 09:03:53 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:04:35.572 09:03:53 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:35.572 09:03:53 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:35.572 09:03:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.572 09:03:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.572 09:03:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.572 09:03:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.572 09:03:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.572 09:03:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:35.573 09:03:53 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.573 09:03:53 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.573 --rc genhtml_branch_coverage=1 00:04:35.573 --rc genhtml_function_coverage=1 00:04:35.573 --rc genhtml_legend=1 00:04:35.573 --rc geninfo_all_blocks=1 00:04:35.573 --rc geninfo_unexecuted_blocks=1 00:04:35.573 00:04:35.573 ' 00:04:35.573 09:03:53 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.573 --rc genhtml_branch_coverage=1 00:04:35.573 --rc genhtml_function_coverage=1 00:04:35.573 --rc 
genhtml_legend=1 00:04:35.573 --rc geninfo_all_blocks=1 00:04:35.573 --rc geninfo_unexecuted_blocks=1 00:04:35.573 00:04:35.573 ' 00:04:35.573 09:03:53 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.573 --rc genhtml_branch_coverage=1 00:04:35.573 --rc genhtml_function_coverage=1 00:04:35.573 --rc genhtml_legend=1 00:04:35.573 --rc geninfo_all_blocks=1 00:04:35.573 --rc geninfo_unexecuted_blocks=1 00:04:35.573 00:04:35.573 ' 00:04:35.573 09:03:53 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.573 --rc genhtml_branch_coverage=1 00:04:35.573 --rc genhtml_function_coverage=1 00:04:35.573 --rc genhtml_legend=1 00:04:35.573 --rc geninfo_all_blocks=1 00:04:35.573 --rc geninfo_unexecuted_blocks=1 00:04:35.573 00:04:35.573 ' 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d8dac9db-f9af-4c2d-89de-4790b63e0fa6 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d8dac9db-f9af-4c2d-89de-4790b63e0fa6 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.573 09:03:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.573 09:03:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.573 09:03:53 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.573 09:03:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.573 09:03:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:35.573 09:03:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.573 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.573 09:03:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:35.573 INFO: launching applications... 00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:35.573 09:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:35.573 09:03:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:35.573 09:03:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57595 00:04:35.574 Waiting for target to run... 00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57595 /var/tmp/spdk_tgt.sock 00:04:35.574 09:03:53 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57595 ']' 00:04:35.574 09:03:53 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.574 09:03:53 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:35.574 09:03:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:35.574 09:03:53 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.574 09:03:53 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.574 09:03:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.833 [2024-10-15 09:03:53.473639] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:04:35.833 [2024-10-15 09:03:53.473781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57595 ] 00:04:36.091 [2024-10-15 09:03:53.864627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.091 [2024-10-15 09:03:53.982578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.058 09:03:54 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.058 09:03:54 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:37.058 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.058 INFO: shutting down applications... 00:04:37.058 09:03:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:37.058 09:03:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57595 ]] 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57595 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57595 00:04:37.058 09:03:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.627 09:03:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.627 09:03:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.627 09:03:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57595 00:04:37.627 09:03:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.887 09:03:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.887 09:03:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.887 09:03:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57595 00:04:37.887 09:03:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.456 09:03:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.456 09:03:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.456 09:03:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57595 00:04:38.456 09:03:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.024 09:03:56 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:39.024 09:03:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.024 09:03:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57595 00:04:39.024 09:03:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.593 09:03:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.593 09:03:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.593 09:03:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57595 00:04:39.593 09:03:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.166 09:03:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.166 09:03:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.166 09:03:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57595 00:04:40.166 09:03:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.167 09:03:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:40.167 09:03:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.167 SPDK target shutdown done 00:04:40.167 09:03:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.167 Success 00:04:40.167 09:03:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.167 00:04:40.167 real 0m4.651s 00:04:40.167 user 0m4.387s 00:04:40.167 sys 0m0.603s 00:04:40.167 09:03:57 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.167 09:03:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.167 ************************************ 00:04:40.167 END TEST json_config_extra_key 00:04:40.167 ************************************ 00:04:40.167 09:03:57 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.167 09:03:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.167 09:03:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.167 09:03:57 -- common/autotest_common.sh@10 -- # set +x 00:04:40.167 ************************************ 00:04:40.167 START TEST alias_rpc 00:04:40.167 ************************************ 00:04:40.167 09:03:57 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.167 * Looking for test storage... 00:04:40.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:40.167 09:03:58 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:40.167 09:03:58 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:40.167 09:03:58 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.430 09:03:58 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.430 09:03:58 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:40.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.430 --rc genhtml_branch_coverage=1 00:04:40.430 --rc genhtml_function_coverage=1 00:04:40.430 --rc genhtml_legend=1 00:04:40.430 --rc geninfo_all_blocks=1 00:04:40.430 --rc geninfo_unexecuted_blocks=1 00:04:40.430 00:04:40.430 ' 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:40.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.430 --rc genhtml_branch_coverage=1 00:04:40.430 --rc genhtml_function_coverage=1 00:04:40.430 --rc 
genhtml_legend=1 00:04:40.430 --rc geninfo_all_blocks=1 00:04:40.430 --rc geninfo_unexecuted_blocks=1 00:04:40.430 00:04:40.430 ' 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:40.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.430 --rc genhtml_branch_coverage=1 00:04:40.430 --rc genhtml_function_coverage=1 00:04:40.430 --rc genhtml_legend=1 00:04:40.430 --rc geninfo_all_blocks=1 00:04:40.430 --rc geninfo_unexecuted_blocks=1 00:04:40.430 00:04:40.430 ' 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:40.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.430 --rc genhtml_branch_coverage=1 00:04:40.430 --rc genhtml_function_coverage=1 00:04:40.430 --rc genhtml_legend=1 00:04:40.430 --rc geninfo_all_blocks=1 00:04:40.430 --rc geninfo_unexecuted_blocks=1 00:04:40.430 00:04:40.430 ' 00:04:40.430 09:03:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.430 09:03:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57707 00:04:40.430 09:03:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.430 09:03:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57707 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57707 ']' 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.430 09:03:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.430 [2024-10-15 09:03:58.213027] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:04:40.430 [2024-10-15 09:03:58.213148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57707 ] 00:04:40.688 [2024-10-15 09:03:58.381588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.689 [2024-10-15 09:03:58.504450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.625 09:03:59 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.625 09:03:59 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:41.625 09:03:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:41.885 09:03:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57707 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57707 ']' 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57707 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57707 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.885 killing process with pid 57707 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57707' 00:04:41.885 09:03:59 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57707 00:04:41.885 09:03:59 alias_rpc -- common/autotest_common.sh@974 -- # wait 57707 00:04:44.417 00:04:44.417 real 0m4.434s 00:04:44.417 user 0m4.397s 00:04:44.417 sys 0m0.643s 00:04:44.417 09:04:02 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.417 09:04:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.417 ************************************ 00:04:44.417 END TEST alias_rpc 00:04:44.417 ************************************ 00:04:44.676 09:04:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:44.676 09:04:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:44.676 09:04:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.676 09:04:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.676 09:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:44.676 ************************************ 00:04:44.676 START TEST spdkcli_tcp 00:04:44.676 ************************************ 00:04:44.676 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:44.676 * Looking for test storage... 
00:04:44.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:44.676 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:44.676 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:44.676 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:44.676 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:44.676 09:04:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.935 09:04:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.936 09:04:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.936 09:04:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:44.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.936 --rc genhtml_branch_coverage=1 00:04:44.936 --rc genhtml_function_coverage=1 00:04:44.936 --rc genhtml_legend=1 00:04:44.936 --rc geninfo_all_blocks=1 00:04:44.936 --rc geninfo_unexecuted_blocks=1 00:04:44.936 00:04:44.936 ' 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:44.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.936 --rc genhtml_branch_coverage=1 00:04:44.936 --rc genhtml_function_coverage=1 00:04:44.936 --rc genhtml_legend=1 00:04:44.936 --rc geninfo_all_blocks=1 00:04:44.936 --rc geninfo_unexecuted_blocks=1 00:04:44.936 00:04:44.936 ' 00:04:44.936 09:04:02 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:44.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.936 --rc genhtml_branch_coverage=1 00:04:44.936 --rc genhtml_function_coverage=1 00:04:44.936 --rc genhtml_legend=1 00:04:44.936 --rc geninfo_all_blocks=1 00:04:44.936 --rc geninfo_unexecuted_blocks=1 00:04:44.936 00:04:44.936 ' 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:44.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.936 --rc genhtml_branch_coverage=1 00:04:44.936 --rc genhtml_function_coverage=1 00:04:44.936 --rc genhtml_legend=1 00:04:44.936 --rc geninfo_all_blocks=1 00:04:44.936 --rc geninfo_unexecuted_blocks=1 00:04:44.936 00:04:44.936 ' 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57819 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.936 09:04:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57819 00:04:44.936 09:04:02 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57819 ']' 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.936 09:04:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.936 [2024-10-15 09:04:02.717328] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:04:44.936 [2024-10-15 09:04:02.717469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57819 ] 00:04:45.198 [2024-10-15 09:04:02.889600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.198 [2024-10-15 09:04:03.020944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.198 [2024-10-15 09:04:03.020980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.146 09:04:03 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.146 09:04:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:46.146 09:04:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57842 00:04:46.146 09:04:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:46.146 09:04:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:46.406 [ 00:04:46.406 "bdev_malloc_delete", 
00:04:46.406 "bdev_malloc_create", 00:04:46.406 "bdev_null_resize", 00:04:46.406 "bdev_null_delete", 00:04:46.406 "bdev_null_create", 00:04:46.406 "bdev_nvme_cuse_unregister", 00:04:46.406 "bdev_nvme_cuse_register", 00:04:46.406 "bdev_opal_new_user", 00:04:46.406 "bdev_opal_set_lock_state", 00:04:46.406 "bdev_opal_delete", 00:04:46.406 "bdev_opal_get_info", 00:04:46.406 "bdev_opal_create", 00:04:46.406 "bdev_nvme_opal_revert", 00:04:46.406 "bdev_nvme_opal_init", 00:04:46.406 "bdev_nvme_send_cmd", 00:04:46.406 "bdev_nvme_set_keys", 00:04:46.406 "bdev_nvme_get_path_iostat", 00:04:46.406 "bdev_nvme_get_mdns_discovery_info", 00:04:46.406 "bdev_nvme_stop_mdns_discovery", 00:04:46.406 "bdev_nvme_start_mdns_discovery", 00:04:46.406 "bdev_nvme_set_multipath_policy", 00:04:46.406 "bdev_nvme_set_preferred_path", 00:04:46.406 "bdev_nvme_get_io_paths", 00:04:46.406 "bdev_nvme_remove_error_injection", 00:04:46.406 "bdev_nvme_add_error_injection", 00:04:46.406 "bdev_nvme_get_discovery_info", 00:04:46.406 "bdev_nvme_stop_discovery", 00:04:46.406 "bdev_nvme_start_discovery", 00:04:46.406 "bdev_nvme_get_controller_health_info", 00:04:46.406 "bdev_nvme_disable_controller", 00:04:46.406 "bdev_nvme_enable_controller", 00:04:46.406 "bdev_nvme_reset_controller", 00:04:46.406 "bdev_nvme_get_transport_statistics", 00:04:46.406 "bdev_nvme_apply_firmware", 00:04:46.406 "bdev_nvme_detach_controller", 00:04:46.406 "bdev_nvme_get_controllers", 00:04:46.406 "bdev_nvme_attach_controller", 00:04:46.406 "bdev_nvme_set_hotplug", 00:04:46.406 "bdev_nvme_set_options", 00:04:46.406 "bdev_passthru_delete", 00:04:46.406 "bdev_passthru_create", 00:04:46.406 "bdev_lvol_set_parent_bdev", 00:04:46.406 "bdev_lvol_set_parent", 00:04:46.406 "bdev_lvol_check_shallow_copy", 00:04:46.406 "bdev_lvol_start_shallow_copy", 00:04:46.406 "bdev_lvol_grow_lvstore", 00:04:46.406 "bdev_lvol_get_lvols", 00:04:46.406 "bdev_lvol_get_lvstores", 00:04:46.406 "bdev_lvol_delete", 00:04:46.406 "bdev_lvol_set_read_only", 
00:04:46.406 "bdev_lvol_resize", 00:04:46.406 "bdev_lvol_decouple_parent", 00:04:46.406 "bdev_lvol_inflate", 00:04:46.406 "bdev_lvol_rename", 00:04:46.406 "bdev_lvol_clone_bdev", 00:04:46.406 "bdev_lvol_clone", 00:04:46.406 "bdev_lvol_snapshot", 00:04:46.406 "bdev_lvol_create", 00:04:46.406 "bdev_lvol_delete_lvstore", 00:04:46.406 "bdev_lvol_rename_lvstore", 00:04:46.406 "bdev_lvol_create_lvstore", 00:04:46.406 "bdev_raid_set_options", 00:04:46.406 "bdev_raid_remove_base_bdev", 00:04:46.406 "bdev_raid_add_base_bdev", 00:04:46.406 "bdev_raid_delete", 00:04:46.406 "bdev_raid_create", 00:04:46.406 "bdev_raid_get_bdevs", 00:04:46.406 "bdev_error_inject_error", 00:04:46.406 "bdev_error_delete", 00:04:46.406 "bdev_error_create", 00:04:46.406 "bdev_split_delete", 00:04:46.406 "bdev_split_create", 00:04:46.406 "bdev_delay_delete", 00:04:46.406 "bdev_delay_create", 00:04:46.406 "bdev_delay_update_latency", 00:04:46.406 "bdev_zone_block_delete", 00:04:46.406 "bdev_zone_block_create", 00:04:46.406 "blobfs_create", 00:04:46.406 "blobfs_detect", 00:04:46.406 "blobfs_set_cache_size", 00:04:46.406 "bdev_aio_delete", 00:04:46.406 "bdev_aio_rescan", 00:04:46.406 "bdev_aio_create", 00:04:46.406 "bdev_ftl_set_property", 00:04:46.406 "bdev_ftl_get_properties", 00:04:46.406 "bdev_ftl_get_stats", 00:04:46.406 "bdev_ftl_unmap", 00:04:46.406 "bdev_ftl_unload", 00:04:46.406 "bdev_ftl_delete", 00:04:46.406 "bdev_ftl_load", 00:04:46.406 "bdev_ftl_create", 00:04:46.406 "bdev_virtio_attach_controller", 00:04:46.406 "bdev_virtio_scsi_get_devices", 00:04:46.406 "bdev_virtio_detach_controller", 00:04:46.406 "bdev_virtio_blk_set_hotplug", 00:04:46.406 "bdev_iscsi_delete", 00:04:46.406 "bdev_iscsi_create", 00:04:46.406 "bdev_iscsi_set_options", 00:04:46.406 "accel_error_inject_error", 00:04:46.406 "ioat_scan_accel_module", 00:04:46.406 "dsa_scan_accel_module", 00:04:46.406 "iaa_scan_accel_module", 00:04:46.406 "keyring_file_remove_key", 00:04:46.406 "keyring_file_add_key", 00:04:46.406 
"keyring_linux_set_options", 00:04:46.406 "fsdev_aio_delete", 00:04:46.406 "fsdev_aio_create", 00:04:46.406 "iscsi_get_histogram", 00:04:46.406 "iscsi_enable_histogram", 00:04:46.406 "iscsi_set_options", 00:04:46.406 "iscsi_get_auth_groups", 00:04:46.406 "iscsi_auth_group_remove_secret", 00:04:46.406 "iscsi_auth_group_add_secret", 00:04:46.406 "iscsi_delete_auth_group", 00:04:46.406 "iscsi_create_auth_group", 00:04:46.406 "iscsi_set_discovery_auth", 00:04:46.406 "iscsi_get_options", 00:04:46.406 "iscsi_target_node_request_logout", 00:04:46.406 "iscsi_target_node_set_redirect", 00:04:46.406 "iscsi_target_node_set_auth", 00:04:46.406 "iscsi_target_node_add_lun", 00:04:46.406 "iscsi_get_stats", 00:04:46.406 "iscsi_get_connections", 00:04:46.406 "iscsi_portal_group_set_auth", 00:04:46.406 "iscsi_start_portal_group", 00:04:46.406 "iscsi_delete_portal_group", 00:04:46.406 "iscsi_create_portal_group", 00:04:46.406 "iscsi_get_portal_groups", 00:04:46.406 "iscsi_delete_target_node", 00:04:46.406 "iscsi_target_node_remove_pg_ig_maps", 00:04:46.406 "iscsi_target_node_add_pg_ig_maps", 00:04:46.406 "iscsi_create_target_node", 00:04:46.406 "iscsi_get_target_nodes", 00:04:46.406 "iscsi_delete_initiator_group", 00:04:46.406 "iscsi_initiator_group_remove_initiators", 00:04:46.406 "iscsi_initiator_group_add_initiators", 00:04:46.406 "iscsi_create_initiator_group", 00:04:46.406 "iscsi_get_initiator_groups", 00:04:46.406 "nvmf_set_crdt", 00:04:46.406 "nvmf_set_config", 00:04:46.406 "nvmf_set_max_subsystems", 00:04:46.406 "nvmf_stop_mdns_prr", 00:04:46.406 "nvmf_publish_mdns_prr", 00:04:46.406 "nvmf_subsystem_get_listeners", 00:04:46.406 "nvmf_subsystem_get_qpairs", 00:04:46.406 "nvmf_subsystem_get_controllers", 00:04:46.406 "nvmf_get_stats", 00:04:46.406 "nvmf_get_transports", 00:04:46.406 "nvmf_create_transport", 00:04:46.406 "nvmf_get_targets", 00:04:46.406 "nvmf_delete_target", 00:04:46.406 "nvmf_create_target", 00:04:46.406 "nvmf_subsystem_allow_any_host", 00:04:46.406 
"nvmf_subsystem_set_keys", 00:04:46.406 "nvmf_subsystem_remove_host", 00:04:46.406 "nvmf_subsystem_add_host", 00:04:46.406 "nvmf_ns_remove_host", 00:04:46.406 "nvmf_ns_add_host", 00:04:46.406 "nvmf_subsystem_remove_ns", 00:04:46.406 "nvmf_subsystem_set_ns_ana_group", 00:04:46.406 "nvmf_subsystem_add_ns", 00:04:46.406 "nvmf_subsystem_listener_set_ana_state", 00:04:46.406 "nvmf_discovery_get_referrals", 00:04:46.406 "nvmf_discovery_remove_referral", 00:04:46.406 "nvmf_discovery_add_referral", 00:04:46.406 "nvmf_subsystem_remove_listener", 00:04:46.406 "nvmf_subsystem_add_listener", 00:04:46.406 "nvmf_delete_subsystem", 00:04:46.406 "nvmf_create_subsystem", 00:04:46.406 "nvmf_get_subsystems", 00:04:46.406 "env_dpdk_get_mem_stats", 00:04:46.406 "nbd_get_disks", 00:04:46.406 "nbd_stop_disk", 00:04:46.406 "nbd_start_disk", 00:04:46.406 "ublk_recover_disk", 00:04:46.406 "ublk_get_disks", 00:04:46.406 "ublk_stop_disk", 00:04:46.406 "ublk_start_disk", 00:04:46.406 "ublk_destroy_target", 00:04:46.406 "ublk_create_target", 00:04:46.406 "virtio_blk_create_transport", 00:04:46.406 "virtio_blk_get_transports", 00:04:46.406 "vhost_controller_set_coalescing", 00:04:46.406 "vhost_get_controllers", 00:04:46.406 "vhost_delete_controller", 00:04:46.406 "vhost_create_blk_controller", 00:04:46.406 "vhost_scsi_controller_remove_target", 00:04:46.406 "vhost_scsi_controller_add_target", 00:04:46.406 "vhost_start_scsi_controller", 00:04:46.406 "vhost_create_scsi_controller", 00:04:46.406 "thread_set_cpumask", 00:04:46.406 "scheduler_set_options", 00:04:46.406 "framework_get_governor", 00:04:46.406 "framework_get_scheduler", 00:04:46.406 "framework_set_scheduler", 00:04:46.406 "framework_get_reactors", 00:04:46.406 "thread_get_io_channels", 00:04:46.406 "thread_get_pollers", 00:04:46.406 "thread_get_stats", 00:04:46.406 "framework_monitor_context_switch", 00:04:46.406 "spdk_kill_instance", 00:04:46.406 "log_enable_timestamps", 00:04:46.406 "log_get_flags", 00:04:46.406 "log_clear_flag", 
00:04:46.406 "log_set_flag", 00:04:46.406 "log_get_level", 00:04:46.406 "log_set_level", 00:04:46.406 "log_get_print_level", 00:04:46.406 "log_set_print_level", 00:04:46.406 "framework_enable_cpumask_locks", 00:04:46.406 "framework_disable_cpumask_locks", 00:04:46.406 "framework_wait_init", 00:04:46.406 "framework_start_init", 00:04:46.406 "scsi_get_devices", 00:04:46.406 "bdev_get_histogram", 00:04:46.406 "bdev_enable_histogram", 00:04:46.406 "bdev_set_qos_limit", 00:04:46.406 "bdev_set_qd_sampling_period", 00:04:46.406 "bdev_get_bdevs", 00:04:46.406 "bdev_reset_iostat", 00:04:46.406 "bdev_get_iostat", 00:04:46.407 "bdev_examine", 00:04:46.407 "bdev_wait_for_examine", 00:04:46.407 "bdev_set_options", 00:04:46.407 "accel_get_stats", 00:04:46.407 "accel_set_options", 00:04:46.407 "accel_set_driver", 00:04:46.407 "accel_crypto_key_destroy", 00:04:46.407 "accel_crypto_keys_get", 00:04:46.407 "accel_crypto_key_create", 00:04:46.407 "accel_assign_opc", 00:04:46.407 "accel_get_module_info", 00:04:46.407 "accel_get_opc_assignments", 00:04:46.407 "vmd_rescan", 00:04:46.407 "vmd_remove_device", 00:04:46.407 "vmd_enable", 00:04:46.407 "sock_get_default_impl", 00:04:46.407 "sock_set_default_impl", 00:04:46.407 "sock_impl_set_options", 00:04:46.407 "sock_impl_get_options", 00:04:46.407 "iobuf_get_stats", 00:04:46.407 "iobuf_set_options", 00:04:46.407 "keyring_get_keys", 00:04:46.407 "framework_get_pci_devices", 00:04:46.407 "framework_get_config", 00:04:46.407 "framework_get_subsystems", 00:04:46.407 "fsdev_set_opts", 00:04:46.407 "fsdev_get_opts", 00:04:46.407 "trace_get_info", 00:04:46.407 "trace_get_tpoint_group_mask", 00:04:46.407 "trace_disable_tpoint_group", 00:04:46.407 "trace_enable_tpoint_group", 00:04:46.407 "trace_clear_tpoint_mask", 00:04:46.407 "trace_set_tpoint_mask", 00:04:46.407 "notify_get_notifications", 00:04:46.407 "notify_get_types", 00:04:46.407 "spdk_get_version", 00:04:46.407 "rpc_get_methods" 00:04:46.407 ] 00:04:46.407 09:04:04 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:46.407 09:04:04 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.407 09:04:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:46.407 09:04:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:46.407 09:04:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57819 00:04:46.407 09:04:04 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57819 ']' 00:04:46.407 09:04:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57819 00:04:46.407 09:04:04 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:46.407 09:04:04 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.407 09:04:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57819 00:04:46.666 09:04:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.666 09:04:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.666 killing process with pid 57819 00:04:46.666 09:04:04 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57819' 00:04:46.666 09:04:04 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57819 00:04:46.666 09:04:04 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57819 00:04:49.203 00:04:49.203 real 0m4.501s 00:04:49.203 user 0m8.004s 00:04:49.203 sys 0m0.692s 00:04:49.203 09:04:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.203 09:04:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.203 ************************************ 00:04:49.203 END TEST spdkcli_tcp 00:04:49.203 ************************************ 00:04:49.203 09:04:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.203 09:04:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.203 09:04:06 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.203 09:04:06 -- common/autotest_common.sh@10 -- # set +x 00:04:49.203 ************************************ 00:04:49.203 START TEST dpdk_mem_utility 00:04:49.203 ************************************ 00:04:49.203 09:04:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.203 * Looking for test storage... 00:04:49.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:49.203 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.203 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.203 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.463 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.463 09:04:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.463 09:04:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.463 09:04:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.463 09:04:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.463 09:04:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.463 09:04:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.463 09:04:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:49.464 
09:04:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.464 09:04:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.464 --rc genhtml_branch_coverage=1 00:04:49.464 --rc genhtml_function_coverage=1 00:04:49.464 --rc genhtml_legend=1 00:04:49.464 --rc geninfo_all_blocks=1 00:04:49.464 --rc geninfo_unexecuted_blocks=1 00:04:49.464 00:04:49.464 ' 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.464 --rc 
genhtml_branch_coverage=1 00:04:49.464 --rc genhtml_function_coverage=1 00:04:49.464 --rc genhtml_legend=1 00:04:49.464 --rc geninfo_all_blocks=1 00:04:49.464 --rc geninfo_unexecuted_blocks=1 00:04:49.464 00:04:49.464 ' 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.464 --rc genhtml_branch_coverage=1 00:04:49.464 --rc genhtml_function_coverage=1 00:04:49.464 --rc genhtml_legend=1 00:04:49.464 --rc geninfo_all_blocks=1 00:04:49.464 --rc geninfo_unexecuted_blocks=1 00:04:49.464 00:04:49.464 ' 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.464 --rc genhtml_branch_coverage=1 00:04:49.464 --rc genhtml_function_coverage=1 00:04:49.464 --rc genhtml_legend=1 00:04:49.464 --rc geninfo_all_blocks=1 00:04:49.464 --rc geninfo_unexecuted_blocks=1 00:04:49.464 00:04:49.464 ' 00:04:49.464 09:04:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.464 09:04:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57947 00:04:49.464 09:04:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.464 09:04:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57947 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57947 ']' 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
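The dotted-version check traced just above (`scripts/common.sh` `cmp_versions`, used here to decide whether the installed `lcov` is older than 2 before enabling the extra `--rc` coverage options) can be sketched as a standalone bash function. This is a minimal sketch, not the SPDK implementation: the function name `ver_lt` is hypothetical, and it assumes purely numeric fields, following the traced steps of splitting on `.-:`, comparing field by field, and treating missing fields as 0.

```shell
#!/usr/bin/env bash
# ver_lt A B — return 0 (true) if version A sorts strictly before version B.
# Hypothetical helper mirroring the cmp_versions trace above; numeric fields only.
ver_lt() {
    local IFS='.-:'          # split fields on the same separators the trace uses
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # pad the shorter version with zeros
        (( 10#$a < 10#$b )) && return 0
        (( 10#$b < 10#$a )) && return 1
    done
    return 1                 # equal versions are not less-than
}

if ver_lt 1.15 2; then
    echo "lcov 1.15 is older than 2"
fi
```

The base-10 forcing (`10#$a`) guards against fields with leading zeros being read as octal; the real `scripts/common.sh` dispatches through a `case "$op"` on `<`/`>`/`=` as the trace shows, which this sketch collapses to the single less-than case exercised in the log.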
00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.464 09:04:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.464 [2024-10-15 09:04:07.279131] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:04:49.464 [2024-10-15 09:04:07.279282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57947 ] 00:04:49.724 [2024-10-15 09:04:07.451356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.724 [2024-10-15 09:04:07.572444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.660 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.660 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:50.660 09:04:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:50.660 09:04:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:50.660 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.660 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.660 { 00:04:50.661 "filename": "/tmp/spdk_mem_dump.txt" 00:04:50.661 } 00:04:50.661 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.661 09:04:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.661 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:50.661 1 heaps 
totaling size 816.000000 MiB 00:04:50.661 size: 816.000000 MiB heap id: 0 00:04:50.661 end heaps---------- 00:04:50.661 9 mempools totaling size 595.772034 MiB 00:04:50.661 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:50.661 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:50.661 size: 92.545471 MiB name: bdev_io_57947 00:04:50.661 size: 50.003479 MiB name: msgpool_57947 00:04:50.661 size: 36.509338 MiB name: fsdev_io_57947 00:04:50.661 size: 21.763794 MiB name: PDU_Pool 00:04:50.661 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:50.661 size: 4.133484 MiB name: evtpool_57947 00:04:50.661 size: 0.026123 MiB name: Session_Pool 00:04:50.661 end mempools------- 00:04:50.661 6 memzones totaling size 4.142822 MiB 00:04:50.661 size: 1.000366 MiB name: RG_ring_0_57947 00:04:50.661 size: 1.000366 MiB name: RG_ring_1_57947 00:04:50.661 size: 1.000366 MiB name: RG_ring_4_57947 00:04:50.661 size: 1.000366 MiB name: RG_ring_5_57947 00:04:50.661 size: 0.125366 MiB name: RG_ring_2_57947 00:04:50.661 size: 0.015991 MiB name: RG_ring_3_57947 00:04:50.661 end memzones------- 00:04:50.661 09:04:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:50.922 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:04:50.922 list of free elements. 
size: 16.790161 MiB 00:04:50.922 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:50.922 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:50.922 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:50.922 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:50.922 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:50.922 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:50.922 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:50.922 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:50.922 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:50.922 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:50.922 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:50.922 element at address: 0x20001ac00000 with size: 0.560486 MiB 00:04:50.922 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:50.922 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:50.922 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:50.922 element at address: 0x200012c00000 with size: 0.443481 MiB 00:04:50.922 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:50.922 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:50.922 list of standard malloc elements. 
size: 199.288940 MiB
00:04:50.922 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:50.922 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:50.922 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:04:50.922 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:04:50.922 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:50.922 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:50.922 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:04:50.922 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:50.922 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:50.922 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:04:50.922 element at address: 0x200012bff040 with size: 0.000305 MiB
00:04:50.922 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:04:50.922 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:04:50.922 elements at addresses 0x2000004fdf40 - 0x2000004ffdc0, 0x20000087e1c0 - 0x20000087f4c0, 0x2000008ff800, 0x2000008ffa80, 0x200000c7d7c0 - 0x200000c7ebc0, 0x200000cfef00, 0x200000cff000, 0x20000a5ff200 - 0x20000a5fff00, 0x200012bff180 - 0x200012bfff00, 0x200012c71880 - 0x200012c72180, 0x200012cf24c0, 0x200018afdd00, 0x200018e7cec0 - 0x200018e7d9c0, 0x200018efdd00, 0x2000192ffc40, 0x2000195efbc0, 0x2000195efcc0, 0x2000196bc680, 0x20001ac8f7c0 - 0x20001ac953c0, 0x200028063f40, 0x200028064040, 0x20002806ad00 - 0x20002806fe80: each with size 0.000244 MiB
00:04:50.924 list of memzone associated elements.
size: 599.920898 MiB
00:04:50.924 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:04:50.924 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:50.924 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:04:50.924 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:50.924 element at address: 0x200012df4740 with size: 92.045105 MiB
00:04:50.924 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57947_0
00:04:50.924 element at address: 0x200000dff340 with size: 48.003113 MiB
00:04:50.924 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57947_0
00:04:50.924 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:04:50.924 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57947_0
00:04:50.924 element at address: 0x2000197be900 with size: 20.255615 MiB
00:04:50.924 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:50.924 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:04:50.924 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:50.924 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:04:50.924 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57947_0
00:04:50.924 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:04:50.924 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57947
00:04:50.924 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:04:50.924 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57947
00:04:50.924 element at address: 0x200018efde00 with size: 1.008179 MiB
00:04:50.924 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:50.924 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:04:50.924 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:50.924 element at address: 0x200018afde00 with size: 1.008179 MiB
00:04:50.924 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:50.924 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:04:50.924 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:50.924 element at address: 0x200000cff100 with size: 1.000549 MiB
00:04:50.924 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57947
00:04:50.924 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:04:50.924 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57947
00:04:50.924 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:04:50.924 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57947
00:04:50.924 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:04:50.925 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57947
00:04:50.925 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:04:50.925 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57947
00:04:50.925 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:04:50.925 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57947
00:04:50.925 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:04:50.925 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:50.925 element at address: 0x200012c72280 with size: 0.500549 MiB
00:04:50.925 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:50.925 element at address: 0x20001967c440 with size: 0.250549 MiB
00:04:50.925 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:50.925 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:04:50.925 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57947
00:04:50.925 element at address: 0x20000085df80 with size: 0.125549 MiB
00:04:50.925 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57947
00:04:50.925 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:04:50.925 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:50.925 element at address: 0x200028064140 with size: 0.023804 MiB
00:04:50.925 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:50.925 element at address: 0x200000859d40 with size: 0.016174 MiB
00:04:50.925 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57947
00:04:50.925 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:04:50.925 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:50.925 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:04:50.925 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57947
00:04:50.925 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:04:50.925 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57947
00:04:50.925 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:04:50.925 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57947
00:04:50.925 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:04:50.925 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:50.925 09:04:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:50.925 09:04:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57947
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57947 ']'
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57947
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57947
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57947'
00:04:50.925 killing process with pid 57947
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57947
00:04:50.925 09:04:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57947
00:04:53.465
00:04:53.465 real 0m4.182s
00:04:53.465 user 0m4.075s
00:04:53.465 sys 0m0.605s
00:04:53.465 09:04:11 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:53.465 ************************************
00:04:53.465 END TEST dpdk_mem_utility
00:04:53.465 09:04:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:53.465 ************************************
00:04:53.465 09:04:11 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:53.465 09:04:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:53.465 09:04:11 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:53.465 09:04:11 -- common/autotest_common.sh@10 -- # set +x
00:04:53.465 ************************************
00:04:53.465 START TEST event
00:04:53.465 ************************************
00:04:53.465 09:04:11 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
00:04:53.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:53.465 09:04:11 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:53.465 09:04:11 event -- common/autotest_common.sh@1691 -- # lcov --version
00:04:53.465 09:04:11 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:53.731 09:04:11 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:53.731 09:04:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:53.731 09:04:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:53.731 09:04:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:53.731 09:04:11 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:53.731 09:04:11 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:53.731 09:04:11 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:53.731 09:04:11 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:53.731 09:04:11 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:53.731 09:04:11 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:53.731 09:04:11 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:53.731 09:04:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:53.731 09:04:11 event -- scripts/common.sh@344 -- # case "$op" in
00:04:53.731 09:04:11 event -- scripts/common.sh@345 -- # : 1
00:04:53.731 09:04:11 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:53.731 09:04:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:53.731 09:04:11 event -- scripts/common.sh@365 -- # decimal 1
00:04:53.731 09:04:11 event -- scripts/common.sh@353 -- # local d=1
00:04:53.731 09:04:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:53.731 09:04:11 event -- scripts/common.sh@355 -- # echo 1
00:04:53.731 09:04:11 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:53.731 09:04:11 event -- scripts/common.sh@366 -- # decimal 2
00:04:53.731 09:04:11 event -- scripts/common.sh@353 -- # local d=2
00:04:53.731 09:04:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:53.731 09:04:11 event -- scripts/common.sh@355 -- # echo 2
00:04:53.731 09:04:11 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:53.731 09:04:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:53.731 09:04:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:53.731 09:04:11 event -- scripts/common.sh@368 -- # return 0
00:04:53.731 09:04:11 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:53.731 09:04:11 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.731 --rc genhtml_branch_coverage=1
00:04:53.731 --rc genhtml_function_coverage=1
00:04:53.731 --rc genhtml_legend=1
00:04:53.731 --rc geninfo_all_blocks=1
00:04:53.731 --rc geninfo_unexecuted_blocks=1
00:04:53.731
00:04:53.731 '
00:04:53.731 09:04:11 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.731 --rc genhtml_branch_coverage=1
00:04:53.731 --rc genhtml_function_coverage=1
00:04:53.731 --rc genhtml_legend=1
00:04:53.731 --rc geninfo_all_blocks=1
00:04:53.731 --rc geninfo_unexecuted_blocks=1
00:04:53.731
00:04:53.731 '
00:04:53.731 09:04:11 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.731 --rc genhtml_branch_coverage=1
00:04:53.731 --rc genhtml_function_coverage=1
00:04:53.731 --rc genhtml_legend=1
00:04:53.731 --rc geninfo_all_blocks=1
00:04:53.731 --rc geninfo_unexecuted_blocks=1
00:04:53.731
00:04:53.731 '
00:04:53.731 09:04:11 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:53.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.731 --rc genhtml_branch_coverage=1
00:04:53.731 --rc genhtml_function_coverage=1
00:04:53.731 --rc genhtml_legend=1
00:04:53.731 --rc geninfo_all_blocks=1
00:04:53.731 --rc geninfo_unexecuted_blocks=1
00:04:53.731
00:04:53.731 '
00:04:53.731 09:04:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:53.731 09:04:11 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:53.731 09:04:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:53.731 09:04:11 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:04:53.731 09:04:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:53.731 09:04:11 event -- common/autotest_common.sh@10 -- # set +x
00:04:53.731 ************************************
00:04:53.731 START TEST event_perf
00:04:53.731 ************************************
00:04:53.731 09:04:11 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:53.731 Running I/O for 1 seconds...[2024-10-15 09:04:11.479225] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization...
00:04:53.732 [2024-10-15 09:04:11.479357] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58055 ]
00:04:53.990 [2024-10-15 09:04:11.650746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:53.990 [2024-10-15 09:04:11.776887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:53.990 [2024-10-15 09:04:11.777068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:53.990 Running I/O for 1 seconds...[2024-10-15 09:04:11.777448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.990 [2024-10-15 09:04:11.777530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:55.369
00:04:55.369 lcore 0: 101968
00:04:55.369 lcore 1: 101967
00:04:55.369 lcore 2: 101967
00:04:55.369 lcore 3: 101967
00:04:55.369 done.
00:04:55.369 00:04:55.369 real 0m1.589s 00:04:55.369 user 0m4.346s 00:04:55.369 sys 0m0.121s 00:04:55.369 09:04:13 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.369 09:04:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.369 ************************************ 00:04:55.369 END TEST event_perf 00:04:55.369 ************************************ 00:04:55.369 09:04:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.369 09:04:13 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:55.369 09:04:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.369 09:04:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.369 ************************************ 00:04:55.369 START TEST event_reactor 00:04:55.369 ************************************ 00:04:55.369 09:04:13 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.369 [2024-10-15 09:04:13.150840] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:04:55.369 [2024-10-15 09:04:13.150979] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58094 ] 00:04:55.628 [2024-10-15 09:04:13.320886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.628 [2024-10-15 09:04:13.447405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.006 test_start 00:04:57.006 oneshot 00:04:57.006 tick 100 00:04:57.006 tick 100 00:04:57.006 tick 250 00:04:57.006 tick 100 00:04:57.006 tick 100 00:04:57.006 tick 100 00:04:57.006 tick 250 00:04:57.006 tick 500 00:04:57.006 tick 100 00:04:57.006 tick 100 00:04:57.006 tick 250 00:04:57.006 tick 100 00:04:57.006 tick 100 00:04:57.006 test_end 00:04:57.006 00:04:57.006 real 0m1.599s 00:04:57.006 user 0m1.377s 00:04:57.006 sys 0m0.112s 00:04:57.006 09:04:14 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.006 09:04:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:57.006 ************************************ 00:04:57.006 END TEST event_reactor 00:04:57.006 ************************************ 00:04:57.006 09:04:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.006 09:04:14 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:57.006 09:04:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.006 09:04:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.006 ************************************ 00:04:57.006 START TEST event_reactor_perf 00:04:57.006 ************************************ 00:04:57.006 09:04:14 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.006 [2024-10-15 
09:04:14.814749] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:04:57.006 [2024-10-15 09:04:14.814878] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58131 ] 00:04:57.265 [2024-10-15 09:04:14.982856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.265 [2024-10-15 09:04:15.100039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.647 test_start 00:04:58.648 test_end 00:04:58.648 Performance: 380511 events per second 00:04:58.648 00:04:58.648 real 0m1.562s 00:04:58.648 user 0m1.350s 00:04:58.648 sys 0m0.105s 00:04:58.648 09:04:16 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.648 09:04:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.648 ************************************ 00:04:58.648 END TEST event_reactor_perf 00:04:58.648 ************************************ 00:04:58.648 09:04:16 event -- event/event.sh@49 -- # uname -s 00:04:58.648 09:04:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:58.648 09:04:16 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.648 09:04:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.648 09:04:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.648 09:04:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.648 ************************************ 00:04:58.648 START TEST event_scheduler 00:04:58.648 ************************************ 00:04:58.648 09:04:16 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.648 * Looking for test storage... 
00:04:58.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:58.648 09:04:16 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.648 09:04:16 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.648 09:04:16 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.907 09:04:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.907 --rc genhtml_branch_coverage=1 00:04:58.907 --rc genhtml_function_coverage=1 00:04:58.907 --rc genhtml_legend=1 00:04:58.907 --rc geninfo_all_blocks=1 00:04:58.907 --rc geninfo_unexecuted_blocks=1 00:04:58.907 00:04:58.907 ' 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.907 --rc genhtml_branch_coverage=1 00:04:58.907 --rc genhtml_function_coverage=1 00:04:58.907 --rc 
genhtml_legend=1 00:04:58.907 --rc geninfo_all_blocks=1 00:04:58.907 --rc geninfo_unexecuted_blocks=1 00:04:58.907 00:04:58.907 ' 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.907 --rc genhtml_branch_coverage=1 00:04:58.907 --rc genhtml_function_coverage=1 00:04:58.907 --rc genhtml_legend=1 00:04:58.907 --rc geninfo_all_blocks=1 00:04:58.907 --rc geninfo_unexecuted_blocks=1 00:04:58.907 00:04:58.907 ' 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.907 --rc genhtml_branch_coverage=1 00:04:58.907 --rc genhtml_function_coverage=1 00:04:58.907 --rc genhtml_legend=1 00:04:58.907 --rc geninfo_all_blocks=1 00:04:58.907 --rc geninfo_unexecuted_blocks=1 00:04:58.907 00:04:58.907 ' 00:04:58.907 09:04:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.907 09:04:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58207 00:04:58.907 09:04:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.907 09:04:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.907 09:04:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58207 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58207 ']' 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.907 09:04:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.907 [2024-10-15 09:04:16.666119] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:04:58.907 [2024-10-15 09:04:16.666239] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58207 ] 00:04:59.166 [2024-10-15 09:04:16.815525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.166 [2024-10-15 09:04:16.976083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.166 [2024-10-15 09:04:16.976215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.166 [2024-10-15 09:04:16.976660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.166 [2024-10-15 09:04:16.976797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.734 09:04:17 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.734 09:04:17 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:59.734 09:04:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:59.734 09:04:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.734 09:04:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.734 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.734 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.734 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.734 POWER: Cannot set governor of lcore 0 to performance 00:04:59.734 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.734 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.734 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.734 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.734 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:59.734 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:59.734 POWER: Unable to set Power Management Environment for lcore 0 00:04:59.734 [2024-10-15 09:04:17.541384] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:59.734 [2024-10-15 09:04:17.541446] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:59.734 [2024-10-15 09:04:17.541458] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:59.734 [2024-10-15 09:04:17.541478] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:59.734 [2024-10-15 09:04:17.541490] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:59.734 [2024-10-15 09:04:17.541501] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:59.734 09:04:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.734 09:04:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:59.734 09:04:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.734 09:04:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.993 [2024-10-15 09:04:17.870598] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:59.993 09:04:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.993 09:04:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.993 09:04:17 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.993 09:04:17 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.993 09:04:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.993 ************************************ 00:04:59.993 START TEST scheduler_create_thread 00:04:59.993 ************************************ 00:04:59.993 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:59.993 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.993 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.993 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 2 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 3 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 4 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 5 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 6 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:00.251 7 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 8 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.251 9 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:00.251 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.252 10 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:00.252 09:04:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.252 09:04:17 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.186 09:04:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.186 00:05:01.186 real 0m1.174s 00:05:01.186 user 0m0.017s 00:05:01.186 sys 0m0.006s 00:05:01.186 ************************************ 00:05:01.186 END TEST scheduler_create_thread 00:05:01.186 ************************************ 00:05:01.186 09:04:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.186 09:04:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.445 09:04:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:01.445 09:04:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58207 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58207 ']' 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58207 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58207 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:01.445 killing process with pid 58207 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58207' 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58207 00:05:01.445 09:04:19 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58207 00:05:01.704 [2024-10-15 09:04:19.534835] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:03.081 00:05:03.081 real 0m4.358s 00:05:03.081 user 0m7.459s 00:05:03.081 sys 0m0.508s 00:05:03.081 09:04:20 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.081 09:04:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.081 ************************************ 00:05:03.081 END TEST event_scheduler 00:05:03.081 ************************************ 00:05:03.081 09:04:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.081 09:04:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.081 09:04:20 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.081 09:04:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.081 09:04:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.081 ************************************ 00:05:03.081 START TEST app_repeat 00:05:03.081 ************************************ 00:05:03.081 09:04:20 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58302 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:03.081 
09:04:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.081 Process app_repeat pid: 58302 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58302' 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.081 spdk_app_start Round 0 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.081 09:04:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58302 /var/tmp/spdk-nbd.sock 00:05:03.081 09:04:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58302 ']' 00:05:03.081 09:04:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.081 09:04:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.081 09:04:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.081 09:04:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.081 09:04:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.081 [2024-10-15 09:04:20.899943] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:05:03.081 [2024-10-15 09:04:20.900062] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58302 ] 00:05:03.340 [2024-10-15 09:04:21.067676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.340 [2024-10-15 09:04:21.192490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.340 [2024-10-15 09:04:21.192518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.909 09:04:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.909 09:04:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:03.909 09:04:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.168 Malloc0 00:05:04.427 09:04:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.686 Malloc1 00:05:04.686 09:04:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.686 09:04:22 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.686 09:04:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.030 /dev/nbd0 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.030 1+0 records in 00:05:05.030 1+0 
records out 00:05:05.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478146 s, 8.6 MB/s 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.030 /dev/nbd1 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.030 1+0 records in 00:05:05.030 1+0 records out 00:05:05.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367528 s, 11.1 MB/s 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:05.030 09:04:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.030 09:04:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.293 09:04:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.293 { 00:05:05.293 "nbd_device": "/dev/nbd0", 00:05:05.293 "bdev_name": "Malloc0" 00:05:05.293 }, 00:05:05.293 { 00:05:05.293 "nbd_device": "/dev/nbd1", 00:05:05.293 "bdev_name": "Malloc1" 00:05:05.293 } 00:05:05.293 ]' 00:05:05.293 09:04:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.293 { 00:05:05.293 "nbd_device": "/dev/nbd0", 00:05:05.293 "bdev_name": "Malloc0" 00:05:05.293 }, 00:05:05.293 { 00:05:05.293 "nbd_device": "/dev/nbd1", 00:05:05.293 "bdev_name": "Malloc1" 00:05:05.293 } 00:05:05.293 ]' 00:05:05.293 09:04:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
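The waitfornbd traces above follow a two-step pattern: poll /proc/partitions until the device name appears, then prove the device is actually readable with a single 4 KiB direct-I/O read whose size is checked. A condensed sketch of that pattern follows; the helper names and temp-file handling are hypothetical (the real helper in autotest_common.sh also retries the read up to 20 times), only the poll-then-read logic is taken from the trace.

```shell
# Sketch of the waitfornbd pattern traced above. The polling helper is
# split out so it can be exercised against any file, not only
# /proc/partitions; names and the temp path are hypothetical.
wait_for_word() {
    # Poll file $2 for whole word $1, up to 20 attempts.
    local word=$1 file=$2 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$word" "$file" && return 0
        sleep 0.1
    done
    return 1
}

waitfornbd() {
    local nbd_name=$1 tmp size
    # Step 1: wait for the kernel to publish the nbd device.
    wait_for_word "$nbd_name" /proc/partitions || return 1
    # Step 2: prove it is readable. O_DIRECT bypasses the page cache,
    # so a successful 4 KiB read really reached the backing bdev.
    tmp=$(mktemp)
    if ! dd "if=/dev/$nbd_name" "of=$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
        rm -f "$tmp"
        return 1
    fi
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" -ne 0 ]
}
```

The size check matters because dd can "succeed" while copying zero bytes; comparing against 0, as the `'[' 4096 '!=' 0 ']'` test in the trace does, catches that case.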
00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.551 /dev/nbd1' 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.551 /dev/nbd1' 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.551 256+0 records in 00:05:05.551 256+0 records out 00:05:05.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127297 s, 82.4 MB/s 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.551 256+0 records in 00:05:05.551 256+0 records out 00:05:05.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258645 s, 40.5 MB/s 00:05:05.551 09:04:23 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.551 09:04:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.551 256+0 records in 00:05:05.552 256+0 records out 00:05:05.552 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303323 s, 34.6 MB/s 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.552 09:04:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.810 09:04:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.069 09:04:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.328 09:04:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.328 09:04:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.328 09:04:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.328 09:04:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.328 09:04:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.328 09:04:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.328 09:04:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.328 09:04:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.329 09:04:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.329 09:04:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.329 09:04:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.329 09:04:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.329 09:04:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.897 09:04:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.834 [2024-10-15 09:04:25.653528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.094 [2024-10-15 09:04:25.769296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.094 [2024-10-15 09:04:25.769297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.094 
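The write-then-verify pass traced earlier (256 blocks of 4 KiB from /dev/urandom pushed through each nbd device with O_DIRECT, then `cmp -b -n 1M` of every device against the random source file) has the following shape. This is a condensed sketch of nbd_dd_data_verify from bdev/nbd_common.sh with a hypothetical temp path, not the exact helper.

```shell
# Sketch of nbd_dd_data_verify as traced above (temp path hypothetical).
# "write" fills a 1 MiB random file and copies it onto every device;
# "verify" compares each device byte-for-byte against that file.
nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=/tmp/nbdrandtest i
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            # O_DIRECT forces the data through to the nbd server
            # instead of parking it in the page cache.
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            # -b prints any differing byte; -n 1M limits the compare
            # to the 1 MiB region that was written.
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}
```

Writing the same random file to both devices and comparing afterwards verifies the full path app -> nbd kernel driver -> SPDK nbd server -> Malloc bdev in both directions.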
[2024-10-15 09:04:25.967272] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.094 [2024-10-15 09:04:25.967372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.998 spdk_app_start Round 1 00:05:09.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.998 09:04:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.998 09:04:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.998 09:04:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58302 /var/tmp/spdk-nbd.sock 00:05:09.998 09:04:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58302 ']' 00:05:09.998 09:04:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.998 09:04:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.998 09:04:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
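The "spdk_app_start Round 1" banner above is printed by the `for i in {0..2}` loop visible in the event.sh trace: each round restarts the app and re-runs the same malloc/nbd verification. Its shape can be sketched as below, with a comment stub standing in for the real per-round work.

```shell
# Shape of the app_repeat round loop seen in this log: three rounds,
# each echoing its banner and re-driving the nbd verification. The
# body is a stub; the real loop waits for /var/tmp/spdk-nbd.sock,
# creates Malloc0/Malloc1, and runs the checks shown in the trace.
run_app_repeat_rounds() {
    local i
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... waitforlisten on /var/tmp/spdk-nbd.sock,
        #     bdev_malloc_create 64 4096 (twice),
        #     nbd_rpc_data_verify, then spdk_kill_instance SIGTERM ...
    done
}
```

Repeating the whole attach/verify/detach cycle is what gives the test its name: it checks that the app can be restarted cleanly, which is why the notify.c "already registered" notices recur at the start of each round.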
00:05:09.998 09:04:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.998 09:04:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.998 09:04:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.998 09:04:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:09.998 09:04:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.259 Malloc0 00:05:10.259 09:04:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.519 Malloc1 00:05:10.519 09:04:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.519 09:04:28 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.519 09:04:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.778 /dev/nbd0 00:05:10.778 09:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.778 09:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.778 1+0 records in 00:05:10.778 1+0 records out 00:05:10.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261868 s, 15.6 MB/s 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.778 
09:04:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.778 09:04:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:10.778 09:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.778 09:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.778 09:04:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.037 /dev/nbd1 00:05:11.037 09:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.037 09:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.037 1+0 records in 00:05:11.037 1+0 records out 00:05:11.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365312 s, 11.2 MB/s 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:11.037 09:04:28 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:11.037 09:04:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:11.037 09:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.037 09:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.037 09:04:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.037 09:04:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.037 09:04:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.296 { 00:05:11.296 "nbd_device": "/dev/nbd0", 00:05:11.296 "bdev_name": "Malloc0" 00:05:11.296 }, 00:05:11.296 { 00:05:11.296 "nbd_device": "/dev/nbd1", 00:05:11.296 "bdev_name": "Malloc1" 00:05:11.296 } 00:05:11.296 ]' 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.296 { 00:05:11.296 "nbd_device": "/dev/nbd0", 00:05:11.296 "bdev_name": "Malloc0" 00:05:11.296 }, 00:05:11.296 { 00:05:11.296 "nbd_device": "/dev/nbd1", 00:05:11.296 "bdev_name": "Malloc1" 00:05:11.296 } 00:05:11.296 ]' 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.296 /dev/nbd1' 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.296 /dev/nbd1' 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.296 
09:04:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.296 256+0 records in 00:05:11.296 256+0 records out 00:05:11.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129258 s, 81.1 MB/s 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.296 256+0 records in 00:05:11.296 256+0 records out 00:05:11.296 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278029 s, 37.7 MB/s 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.296 09:04:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.555 256+0 records in 00:05:11.555 256+0 records out 00:05:11.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332162 s, 31.6 MB/s 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.555 09:04:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.556 09:04:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.815 09:04:29 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.815 09:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.074 09:04:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.333 09:04:29 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.333 09:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.333 09:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.333 09:04:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.333 09:04:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.592 09:04:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.969 [2024-10-15 09:04:31.680425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.969 [2024-10-15 09:04:31.818524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.969 [2024-10-15 09:04:31.818526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.228 [2024-10-15 09:04:32.033371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.228 [2024-10-15 09:04:32.033458] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.603 spdk_app_start Round 2 00:05:15.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
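The count check above takes the nbd_get_disks JSON, projects out `.nbd_device` with jq, and counts matching lines with `grep -c` — yielding 2 while the disks are attached and 0 after nbd_stop_disks returns `[]`. A sketch of that step, assuming jq is available as in the log; note the real nbd_get_count takes the RPC socket and fetches the JSON itself, whereas this hypothetical variant takes the JSON directly:

```shell
# Sketch of the nbd_get_count logic traced above: project the device
# paths out of the nbd_get_disks JSON and count them. grep -c exits 1
# when nothing matches, so `|| true` keeps an empty list from looking
# like an error while the count of 0 is still printed.
nbd_get_count_from_json() {
    local nbd_disks_json=$1
    echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}
```

That `|| true` is the `-- # true` step visible in the trace right after the empty-list grep.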
00:05:15.603 09:04:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.603 09:04:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:15.603 09:04:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58302 /var/tmp/spdk-nbd.sock 00:05:15.603 09:04:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58302 ']' 00:05:15.603 09:04:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.603 09:04:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.603 09:04:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.603 09:04:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.603 09:04:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.861 09:04:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.861 09:04:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:15.861 09:04:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.119 Malloc0 00:05:16.119 09:04:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.378 Malloc1 00:05:16.636 09:04:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.636 /dev/nbd0 00:05:16.636 09:04:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.894 09:04:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.894 1+0 records in 00:05:16.894 1+0 records out 00:05:16.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426205 s, 9.6 MB/s 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:16.894 09:04:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:16.894 09:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.894 09:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.894 09:04:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.894 /dev/nbd1 00:05:17.153 09:04:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.153 09:04:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:17.153 09:04:34 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.153 1+0 records in 00:05:17.153 1+0 records out 00:05:17.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415348 s, 9.9 MB/s 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.153 09:04:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.153 09:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.153 09:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.153 09:04:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.153 09:04:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.153 09:04:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.411 { 00:05:17.411 "nbd_device": "/dev/nbd0", 00:05:17.411 "bdev_name": "Malloc0" 00:05:17.411 }, 00:05:17.411 { 00:05:17.411 "nbd_device": "/dev/nbd1", 00:05:17.411 "bdev_name": "Malloc1" 00:05:17.411 } 00:05:17.411 ]' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.411 { 
00:05:17.411 "nbd_device": "/dev/nbd0", 00:05:17.411 "bdev_name": "Malloc0" 00:05:17.411 }, 00:05:17.411 { 00:05:17.411 "nbd_device": "/dev/nbd1", 00:05:17.411 "bdev_name": "Malloc1" 00:05:17.411 } 00:05:17.411 ]' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.411 /dev/nbd1' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.411 /dev/nbd1' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.411 256+0 records in 00:05:17.411 256+0 records out 00:05:17.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150674 s, 69.6 MB/s 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.411 09:04:35 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.411 256+0 records in 00:05:17.411 256+0 records out 00:05:17.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290925 s, 36.0 MB/s 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.411 256+0 records in 00:05:17.411 256+0 records out 00:05:17.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285328 s, 36.7 MB/s 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.411 09:04:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.669 09:04:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.926 09:04:35 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.926 09:04:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.185 09:04:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.185 09:04:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.754 09:04:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.131 
[2024-10-15 09:04:37.811840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.131 [2024-10-15 09:04:37.926380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.131 [2024-10-15 09:04:37.926383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.390 [2024-10-15 09:04:38.133405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.390 [2024-10-15 09:04:38.133500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.809 09:04:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58302 /var/tmp/spdk-nbd.sock 00:05:21.809 09:04:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58302 ']' 00:05:21.809 09:04:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.809 09:04:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.809 09:04:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:21.809 09:04:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.809 09:04:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.068 09:04:39 event.app_repeat -- event/event.sh@39 -- # killprocess 58302 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58302 ']' 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58302 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58302 00:05:22.068 killing process with pid 58302 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58302' 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58302 00:05:22.068 09:04:39 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58302 00:05:23.006 spdk_app_start is called in Round 0. 00:05:23.006 Shutdown signal received, stop current app iteration 00:05:23.006 Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 reinitialization... 00:05:23.006 spdk_app_start is called in Round 1. 00:05:23.006 Shutdown signal received, stop current app iteration 00:05:23.006 Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 reinitialization... 00:05:23.006 spdk_app_start is called in Round 2. 
00:05:23.006 Shutdown signal received, stop current app iteration 00:05:23.006 Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 reinitialization... 00:05:23.006 spdk_app_start is called in Round 3. 00:05:23.006 Shutdown signal received, stop current app iteration 00:05:23.006 09:04:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:23.006 09:04:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:23.006 00:05:23.006 real 0m20.068s 00:05:23.006 user 0m43.610s 00:05:23.006 sys 0m2.606s 00:05:23.006 09:04:40 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.006 09:04:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.006 ************************************ 00:05:23.006 END TEST app_repeat 00:05:23.006 ************************************ 00:05:23.266 09:04:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:23.266 09:04:40 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.266 09:04:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.266 09:04:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.266 09:04:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.266 ************************************ 00:05:23.266 START TEST cpu_locks 00:05:23.266 ************************************ 00:05:23.266 09:04:40 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:23.266 * Looking for test storage... 
00:05:23.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:23.266 09:04:41 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.266 09:04:41 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.266 09:04:41 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.525 09:04:41 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.525 --rc genhtml_branch_coverage=1 00:05:23.525 --rc genhtml_function_coverage=1 00:05:23.525 --rc genhtml_legend=1 00:05:23.525 --rc geninfo_all_blocks=1 00:05:23.525 --rc geninfo_unexecuted_blocks=1 00:05:23.525 00:05:23.525 ' 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.525 --rc genhtml_branch_coverage=1 00:05:23.525 --rc genhtml_function_coverage=1 00:05:23.525 --rc genhtml_legend=1 00:05:23.525 --rc geninfo_all_blocks=1 00:05:23.525 --rc geninfo_unexecuted_blocks=1 
00:05:23.525 00:05:23.525 ' 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.525 --rc genhtml_branch_coverage=1 00:05:23.525 --rc genhtml_function_coverage=1 00:05:23.525 --rc genhtml_legend=1 00:05:23.525 --rc geninfo_all_blocks=1 00:05:23.525 --rc geninfo_unexecuted_blocks=1 00:05:23.525 00:05:23.525 ' 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.525 --rc genhtml_branch_coverage=1 00:05:23.525 --rc genhtml_function_coverage=1 00:05:23.525 --rc genhtml_legend=1 00:05:23.525 --rc geninfo_all_blocks=1 00:05:23.525 --rc geninfo_unexecuted_blocks=1 00:05:23.525 00:05:23.525 ' 00:05:23.525 09:04:41 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:23.525 09:04:41 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:23.525 09:04:41 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:23.525 09:04:41 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.525 09:04:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.525 ************************************ 00:05:23.525 START TEST default_locks 00:05:23.525 ************************************ 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58755 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.525 
09:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58755 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58755 ']' 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.525 09:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.525 [2024-10-15 09:04:41.332766] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:05:23.525 [2024-10-15 09:04:41.332935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58755 ] 00:05:23.784 [2024-10-15 09:04:41.507808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.784 [2024-10-15 09:04:41.630987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.717 09:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.717 09:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:24.717 09:04:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58755 00:05:24.717 09:04:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58755 00:05:24.717 09:04:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58755 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58755 ']' 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58755 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58755 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.284 killing process with pid 58755 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58755' 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58755 00:05:25.284 09:04:43 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58755 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58755 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58755 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58755 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58755 ']' 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58755) - No such process 00:05:27.819 ERROR: process (pid: 58755) is no longer running 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.819 00:05:27.819 real 0m4.369s 00:05:27.819 user 0m4.384s 00:05:27.819 sys 0m0.741s 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.819 09:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.819 ************************************ 00:05:27.819 END TEST default_locks 00:05:27.819 ************************************ 00:05:27.819 09:04:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:27.819 09:04:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:05:27.819 09:04:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.819 09:04:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.819 ************************************ 00:05:27.819 START TEST default_locks_via_rpc 00:05:27.819 ************************************ 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58830 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58830 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58830 ']' 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.819 09:04:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.078 [2024-10-15 09:04:45.762264] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:05:28.078 [2024-10-15 09:04:45.762411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58830 ] 00:05:28.078 [2024-10-15 09:04:45.931511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.338 [2024-10-15 09:04:46.052875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.275 09:04:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58830 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58830 00:05:29.275 09:04:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58830 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58830 ']' 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58830 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58830 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.534 killing process with pid 58830 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58830' 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58830 00:05:29.534 09:04:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58830 00:05:32.071 00:05:32.071 real 0m4.148s 00:05:32.071 user 0m4.097s 00:05:32.071 sys 0m0.624s 00:05:32.071 09:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.071 09:04:49 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.071 ************************************ 00:05:32.071 END TEST default_locks_via_rpc 00:05:32.071 ************************************ 00:05:32.071 09:04:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:32.071 09:04:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.071 09:04:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.071 09:04:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.071 ************************************ 00:05:32.071 START TEST non_locking_app_on_locked_coremask 00:05:32.071 ************************************ 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58904 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58904 /var/tmp/spdk.sock 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58904 ']' 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.071 09:04:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.331 [2024-10-15 09:04:49.981749] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:05:32.331 [2024-10-15 09:04:49.981923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58904 ] 00:05:32.331 [2024-10-15 09:04:50.156295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.590 [2024-10-15 09:04:50.283434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58925 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58925 /var/tmp/spdk2.sock 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58925 ']' 00:05:33.529 09:04:51 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.529 09:04:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.529 [2024-10-15 09:04:51.363231] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:05:33.529 [2024-10-15 09:04:51.363398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:05:33.789 [2024-10-15 09:04:51.532553] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.789 [2024-10-15 09:04:51.532616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.049 [2024-10-15 09:04:51.796266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.587 09:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.587 09:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:36.587 09:04:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58904 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58904 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58904 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58904 ']' 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58904 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58904 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58904' 00:05:36.587 killing process with pid 58904 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58904 00:05:36.587 09:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58904 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58925 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58925 ']' 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58925 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58925 00:05:43.164 killing process with pid 58925 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58925' 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58925 00:05:43.164 09:04:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58925 00:05:45.080 00:05:45.080 real 0m13.033s 00:05:45.080 user 0m13.337s 00:05:45.080 sys 0m1.359s 00:05:45.080 09:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:45.080 09:05:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.080 ************************************ 00:05:45.080 END TEST non_locking_app_on_locked_coremask 00:05:45.080 ************************************ 00:05:45.080 09:05:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:45.080 09:05:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.080 09:05:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.080 09:05:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.080 ************************************ 00:05:45.080 START TEST locking_app_on_unlocked_coremask 00:05:45.080 ************************************ 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59093 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59093 /var/tmp/spdk.sock 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59093 ']' 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.080 09:05:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.340 [2024-10-15 09:05:03.073300] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:05:45.340 [2024-10-15 09:05:03.073506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59093 ] 00:05:45.600 [2024-10-15 09:05:03.252752] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:45.600 [2024-10-15 09:05:03.252872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.600 [2024-10-15 09:05:03.401730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59109 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59109 /var/tmp/spdk2.sock 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59109 ']' 
00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.978 09:05:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.978 [2024-10-15 09:05:04.658810] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:05:46.978 [2024-10-15 09:05:04.659665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59109 ] 00:05:46.978 [2024-10-15 09:05:04.828033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.547 [2024-10-15 09:05:05.152239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.449 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.449 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:49.449 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59109 00:05:49.449 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59109 00:05:49.449 09:05:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59093 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59093 ']' 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59093 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59093 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.019 killing process with pid 59093 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59093' 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59093 00:05:50.019 09:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59093 00:05:55.346 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59109 00:05:55.346 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59109 ']' 00:05:55.346 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59109 00:05:55.605 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:05:55.605 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.605 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59109 00:05:55.605 killing process with pid 59109 00:05:55.605 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.605 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.605 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59109' 00:05:55.605 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59109 00:05:55.605 09:05:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59109 00:05:58.156 00:05:58.156 real 0m12.972s 00:05:58.156 user 0m13.116s 00:05:58.156 sys 0m1.603s 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.156 ************************************ 00:05:58.156 END TEST locking_app_on_unlocked_coremask 00:05:58.156 ************************************ 00:05:58.156 09:05:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.156 09:05:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.156 09:05:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.156 09:05:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.156 ************************************ 00:05:58.156 START TEST 
locking_app_on_locked_coremask 00:05:58.156 ************************************ 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59275 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59275 /var/tmp/spdk.sock 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59275 ']' 00:05:58.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.156 09:05:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.440 [2024-10-15 09:05:16.102038] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:05:58.440 [2024-10-15 09:05:16.102214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59275 ] 00:05:58.440 [2024-10-15 09:05:16.272083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.698 [2024-10-15 09:05:16.404904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59292 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59292 /var/tmp/spdk2.sock 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59292 /var/tmp/spdk2.sock 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:59.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59292 /var/tmp/spdk2.sock 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59292 ']' 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.634 09:05:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.894 [2024-10-15 09:05:17.538516] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:05:59.894 [2024-10-15 09:05:17.538712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59292 ] 00:05:59.894 [2024-10-15 09:05:17.708923] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59275 has claimed it. 00:05:59.894 [2024-10-15 09:05:17.709017] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:00.462 ERROR: process (pid: 59292) is no longer running 00:06:00.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59292) - No such process 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59275 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59275 00:06:00.462 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59275 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59275 ']' 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59275 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59275 00:06:00.721 
killing process with pid 59275 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59275' 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59275 00:06:00.721 09:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59275 00:06:04.019 00:06:04.019 real 0m5.237s 00:06:04.019 user 0m5.447s 00:06:04.019 sys 0m0.816s 00:06:04.019 09:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.019 09:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.019 ************************************ 00:06:04.019 END TEST locking_app_on_locked_coremask 00:06:04.019 ************************************ 00:06:04.019 09:05:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:04.019 09:05:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.019 09:05:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.019 09:05:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.019 ************************************ 00:06:04.019 START TEST locking_overlapped_coremask 00:06:04.019 ************************************ 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59367 00:06:04.019 09:05:21 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59367 /var/tmp/spdk.sock 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59367 ']' 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.019 09:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.019 [2024-10-15 09:05:21.407175] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:06:04.019 [2024-10-15 09:05:21.407319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59367 ] 00:06:04.019 [2024-10-15 09:05:21.578180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.019 [2024-10-15 09:05:21.717472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.019 [2024-10-15 09:05:21.717621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.019 [2024-10-15 09:05:21.717662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.955 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.955 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:04.955 09:05:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:04.955 09:05:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59385 00:06:04.955 09:05:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59385 /var/tmp/spdk2.sock 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59385 /var/tmp/spdk2.sock 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59385 /var/tmp/spdk2.sock 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59385 ']' 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.956 09:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.956 [2024-10-15 09:05:22.834320] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:06:04.956 [2024-10-15 09:05:22.834469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59385 ] 00:06:05.215 [2024-10-15 09:05:23.008890] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59367 has claimed it. 00:06:05.215 [2024-10-15 09:05:23.008979] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:05.782 ERROR: process (pid: 59385) is no longer running 00:06:05.782 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59385) - No such process 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59367 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59367 ']' 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59367 00:06:05.782 09:05:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59367 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.782 killing process with pid 59367 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59367' 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59367 00:06:05.782 09:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59367 00:06:09.118 00:06:09.118 real 0m4.985s 00:06:09.118 user 0m13.505s 00:06:09.118 sys 0m0.670s 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.118 ************************************ 00:06:09.118 END TEST locking_overlapped_coremask 00:06:09.118 ************************************ 00:06:09.118 09:05:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.118 09:05:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.118 09:05:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.118 09:05:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.118 ************************************ 00:06:09.118 START TEST 
locking_overlapped_coremask_via_rpc 00:06:09.118 ************************************ 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59460 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59460 /var/tmp/spdk.sock 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59460 ']' 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.118 09:05:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.118 [2024-10-15 09:05:26.459551] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:06:09.118 [2024-10-15 09:05:26.459709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:06:09.118 [2024-10-15 09:05:26.632679] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:09.118 [2024-10-15 09:05:26.632776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.118 [2024-10-15 09:05:26.776436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.118 [2024-10-15 09:05:26.776558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.118 [2024-10-15 09:05:26.776598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59478 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59478 /var/tmp/spdk2.sock 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59478 ']' 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.056 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.056 09:05:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.056 [2024-10-15 09:05:27.908001] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:06:10.056 [2024-10-15 09:05:27.908150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59478 ] 00:06:10.314 [2024-10-15 09:05:28.076411] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.314 [2024-10-15 09:05:28.076510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.573 [2024-10-15 09:05:28.352095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.573 [2024-10-15 09:05:28.352160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.573 [2024-10-15 09:05:28.352162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.107 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.107 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.107 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.107 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.107 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.107 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.107 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.107 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.108 09:05:30 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.108 [2024-10-15 09:05:30.587938] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59460 has claimed it. 00:06:13.108 request: 00:06:13.108 { 00:06:13.108 "method": "framework_enable_cpumask_locks", 00:06:13.108 "req_id": 1 00:06:13.108 } 00:06:13.108 Got JSON-RPC error response 00:06:13.108 response: 00:06:13.108 { 00:06:13.108 "code": -32603, 00:06:13.108 "message": "Failed to claim CPU core: 2" 00:06:13.108 } 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59460 /var/tmp/spdk.sock 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59460 ']' 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59478 /var/tmp/spdk2.sock 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59478 ']' 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.108 09:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.366 09:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.366 09:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.366 09:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.366 09:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.366 09:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.366 09:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.366 00:06:13.366 real 0m4.844s 00:06:13.366 user 0m1.527s 00:06:13.366 sys 0m0.266s 00:06:13.366 09:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.366 ************************************ 00:06:13.366 END TEST locking_overlapped_coremask_via_rpc 00:06:13.366 09:05:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.366 ************************************ 00:06:13.366 09:05:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.366 09:05:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59460 ]] 00:06:13.366 09:05:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59460 00:06:13.366 09:05:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59460 ']' 00:06:13.366 09:05:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59460 00:06:13.366 09:05:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:13.366 09:05:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.366 09:05:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59460 00:06:13.624 09:05:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.624 09:05:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.624 killing process with pid 59460 00:06:13.624 09:05:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59460' 00:06:13.624 09:05:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59460 00:06:13.624 09:05:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59460 00:06:16.159 09:05:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59478 ]] 00:06:16.159 09:05:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59478 00:06:16.159 09:05:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59478 ']' 00:06:16.159 09:05:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59478 00:06:16.159 09:05:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:16.159 09:05:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.159 09:05:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59478 00:06:16.418 09:05:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:16.418 09:05:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:16.418 killing process with pid 59478 00:06:16.418 09:05:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59478' 00:06:16.418 09:05:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59478 00:06:16.418 09:05:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59478 00:06:19.738 09:05:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.738 09:05:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:19.738 09:05:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59460 ]] 00:06:19.738 09:05:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59460 00:06:19.738 09:05:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59460 ']' 00:06:19.738 09:05:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59460 00:06:19.738 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59460) - No such process 00:06:19.738 Process with pid 59460 is not found 00:06:19.739 09:05:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59460 is not found' 00:06:19.739 09:05:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59478 ]] 00:06:19.739 09:05:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59478 00:06:19.739 09:05:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59478 ']' 00:06:19.739 09:05:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59478 00:06:19.739 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59478) - No such process 00:06:19.739 Process with pid 59478 is not found 00:06:19.739 09:05:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59478 is not found' 00:06:19.739 09:05:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.739 00:06:19.739 real 0m56.255s 00:06:19.739 user 1m36.737s 00:06:19.739 sys 0m7.405s 00:06:19.739 09:05:37 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.739 09:05:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.739 
************************************ 00:06:19.739 END TEST cpu_locks 00:06:19.739 ************************************ 00:06:19.739 00:06:19.739 real 1m26.080s 00:06:19.739 user 2m35.138s 00:06:19.739 sys 0m11.258s 00:06:19.739 09:05:37 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.739 09:05:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.739 ************************************ 00:06:19.739 END TEST event 00:06:19.739 ************************************ 00:06:19.739 09:05:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.739 09:05:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.739 09:05:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.739 09:05:37 -- common/autotest_common.sh@10 -- # set +x 00:06:19.739 ************************************ 00:06:19.739 START TEST thread 00:06:19.739 ************************************ 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.739 * Looking for test storage... 
00:06:19.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:19.739 09:05:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.739 09:05:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.739 09:05:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.739 09:05:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.739 09:05:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.739 09:05:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.739 09:05:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.739 09:05:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.739 09:05:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.739 09:05:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.739 09:05:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.739 09:05:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:19.739 09:05:37 thread -- scripts/common.sh@345 -- # : 1 00:06:19.739 09:05:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.739 09:05:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.739 09:05:37 thread -- scripts/common.sh@365 -- # decimal 1 00:06:19.739 09:05:37 thread -- scripts/common.sh@353 -- # local d=1 00:06:19.739 09:05:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.739 09:05:37 thread -- scripts/common.sh@355 -- # echo 1 00:06:19.739 09:05:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.739 09:05:37 thread -- scripts/common.sh@366 -- # decimal 2 00:06:19.739 09:05:37 thread -- scripts/common.sh@353 -- # local d=2 00:06:19.739 09:05:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.739 09:05:37 thread -- scripts/common.sh@355 -- # echo 2 00:06:19.739 09:05:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.739 09:05:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.739 09:05:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.739 09:05:37 thread -- scripts/common.sh@368 -- # return 0 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.739 --rc genhtml_branch_coverage=1 00:06:19.739 --rc genhtml_function_coverage=1 00:06:19.739 --rc genhtml_legend=1 00:06:19.739 --rc geninfo_all_blocks=1 00:06:19.739 --rc geninfo_unexecuted_blocks=1 00:06:19.739 00:06:19.739 ' 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.739 --rc genhtml_branch_coverage=1 00:06:19.739 --rc genhtml_function_coverage=1 00:06:19.739 --rc genhtml_legend=1 00:06:19.739 --rc geninfo_all_blocks=1 00:06:19.739 --rc geninfo_unexecuted_blocks=1 00:06:19.739 00:06:19.739 ' 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:19.739 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.739 --rc genhtml_branch_coverage=1 00:06:19.739 --rc genhtml_function_coverage=1 00:06:19.739 --rc genhtml_legend=1 00:06:19.739 --rc geninfo_all_blocks=1 00:06:19.739 --rc geninfo_unexecuted_blocks=1 00:06:19.739 00:06:19.739 ' 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.739 --rc genhtml_branch_coverage=1 00:06:19.739 --rc genhtml_function_coverage=1 00:06:19.739 --rc genhtml_legend=1 00:06:19.739 --rc geninfo_all_blocks=1 00:06:19.739 --rc geninfo_unexecuted_blocks=1 00:06:19.739 00:06:19.739 ' 00:06:19.739 09:05:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.739 09:05:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.739 ************************************ 00:06:19.739 START TEST thread_poller_perf 00:06:19.739 ************************************ 00:06:19.739 09:05:37 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.739 [2024-10-15 09:05:37.603392] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:06:19.739 [2024-10-15 09:05:37.603503] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59686 ] 00:06:19.998 [2024-10-15 09:05:37.779496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.258 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:20.258 [2024-10-15 09:05:37.895981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.635 [2024-10-15T09:05:39.531Z] ====================================== 00:06:21.635 [2024-10-15T09:05:39.531Z] busy:2300208008 (cyc) 00:06:21.635 [2024-10-15T09:05:39.531Z] total_run_count: 380000 00:06:21.635 [2024-10-15T09:05:39.531Z] tsc_hz: 2290000000 (cyc) 00:06:21.635 [2024-10-15T09:05:39.531Z] ====================================== 00:06:21.635 [2024-10-15T09:05:39.531Z] poller_cost: 6053 (cyc), 2643 (nsec) 00:06:21.635 00:06:21.635 real 0m1.584s 00:06:21.635 user 0m1.375s 00:06:21.635 sys 0m0.101s 00:06:21.635 09:05:39 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.635 09:05:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.635 ************************************ 00:06:21.635 END TEST thread_poller_perf 00:06:21.635 ************************************ 00:06:21.635 09:05:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.635 09:05:39 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:21.635 09:05:39 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.635 09:05:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.635 ************************************ 00:06:21.635 START TEST thread_poller_perf 00:06:21.635 
************************************ 00:06:21.635 09:05:39 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.635 [2024-10-15 09:05:39.252048] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:06:21.635 [2024-10-15 09:05:39.252202] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59728 ] 00:06:21.635 [2024-10-15 09:05:39.422456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.893 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:21.893 [2024-10-15 09:05:39.566257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.272 [2024-10-15T09:05:41.168Z] ====================================== 00:06:23.272 [2024-10-15T09:05:41.168Z] busy:2293897896 (cyc) 00:06:23.272 [2024-10-15T09:05:41.168Z] total_run_count: 4620000 00:06:23.272 [2024-10-15T09:05:41.168Z] tsc_hz: 2290000000 (cyc) 00:06:23.272 [2024-10-15T09:05:41.168Z] ====================================== 00:06:23.272 [2024-10-15T09:05:41.168Z] poller_cost: 496 (cyc), 216 (nsec) 00:06:23.272 00:06:23.272 real 0m1.627s 00:06:23.272 user 0m1.403s 00:06:23.272 sys 0m0.115s 00:06:23.272 09:05:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.272 09:05:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.272 ************************************ 00:06:23.272 END TEST thread_poller_perf 00:06:23.272 ************************************ 00:06:23.272 09:05:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:23.272 ************************************ 00:06:23.272 END TEST thread 00:06:23.272 ************************************ 00:06:23.272 
00:06:23.272 real 0m3.551s 00:06:23.272 user 0m2.946s 00:06:23.272 sys 0m0.403s 00:06:23.272 09:05:40 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.272 09:05:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.272 09:05:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:23.272 09:05:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:23.272 09:05:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.272 09:05:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.272 09:05:40 -- common/autotest_common.sh@10 -- # set +x 00:06:23.272 ************************************ 00:06:23.272 START TEST app_cmdline 00:06:23.272 ************************************ 00:06:23.272 09:05:40 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:23.272 * Looking for test storage... 00:06:23.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.272 09:05:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.272 --rc genhtml_branch_coverage=1 00:06:23.272 --rc genhtml_function_coverage=1 00:06:23.272 --rc 
genhtml_legend=1 00:06:23.272 --rc geninfo_all_blocks=1 00:06:23.272 --rc geninfo_unexecuted_blocks=1 00:06:23.272 00:06:23.272 ' 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.272 --rc genhtml_branch_coverage=1 00:06:23.272 --rc genhtml_function_coverage=1 00:06:23.272 --rc genhtml_legend=1 00:06:23.272 --rc geninfo_all_blocks=1 00:06:23.272 --rc geninfo_unexecuted_blocks=1 00:06:23.272 00:06:23.272 ' 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.272 --rc genhtml_branch_coverage=1 00:06:23.272 --rc genhtml_function_coverage=1 00:06:23.272 --rc genhtml_legend=1 00:06:23.272 --rc geninfo_all_blocks=1 00:06:23.272 --rc geninfo_unexecuted_blocks=1 00:06:23.272 00:06:23.272 ' 00:06:23.272 09:05:41 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.272 --rc genhtml_branch_coverage=1 00:06:23.272 --rc genhtml_function_coverage=1 00:06:23.272 --rc genhtml_legend=1 00:06:23.273 --rc geninfo_all_blocks=1 00:06:23.273 --rc geninfo_unexecuted_blocks=1 00:06:23.273 00:06:23.273 ' 00:06:23.532 09:05:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:23.532 09:05:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59816 00:06:23.532 09:05:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:23.532 09:05:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59816 00:06:23.532 09:05:41 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59816 ']' 00:06:23.532 09:05:41 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.532 09:05:41 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:23.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.532 09:05:41 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.532 09:05:41 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.532 09:05:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.532 [2024-10-15 09:05:41.270950] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:06:23.532 [2024-10-15 09:05:41.271084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59816 ] 00:06:23.792 [2024-10-15 09:05:41.438771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.792 [2024-10-15 09:05:41.558988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.732 09:05:42 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.732 09:05:42 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:24.732 09:05:42 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:24.991 { 00:06:24.991 "version": "SPDK v25.01-pre git sha1 0ea3371f3", 00:06:24.991 "fields": { 00:06:24.991 "major": 25, 00:06:24.991 "minor": 1, 00:06:24.991 "patch": 0, 00:06:24.991 "suffix": "-pre", 00:06:24.991 "commit": "0ea3371f3" 00:06:24.991 } 00:06:24.991 } 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:24.991 09:05:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:24.991 09:05:42 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.251 request: 00:06:25.251 { 00:06:25.251 "method": "env_dpdk_get_mem_stats", 00:06:25.251 "req_id": 1 00:06:25.251 } 00:06:25.251 Got JSON-RPC error response 00:06:25.251 response: 00:06:25.251 { 00:06:25.251 "code": -32601, 00:06:25.251 "message": "Method not found" 00:06:25.251 } 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.251 09:05:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59816 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59816 ']' 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59816 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59816 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.251 killing process with pid 59816 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59816' 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@969 -- # kill 59816 00:06:25.251 09:05:43 app_cmdline -- common/autotest_common.sh@974 -- # wait 59816 00:06:27.788 00:06:27.788 real 0m4.741s 00:06:27.788 user 0m5.113s 00:06:27.788 sys 0m0.634s 00:06:27.788 09:05:45 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.788 09:05:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.046 ************************************ 00:06:28.046 END TEST app_cmdline 00:06:28.046 ************************************ 00:06:28.046 09:05:45 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.046 09:05:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.046 09:05:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.046 09:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:28.046 ************************************ 00:06:28.046 START TEST version 00:06:28.046 ************************************ 00:06:28.046 09:05:45 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.046 * Looking for test storage... 00:06:28.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:28.047 09:05:45 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:28.047 09:05:45 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:28.047 09:05:45 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:28.305 09:05:45 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:28.305 09:05:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.305 09:05:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.305 09:05:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.305 09:05:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.305 09:05:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.305 09:05:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.305 09:05:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.305 09:05:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.305 09:05:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.305 09:05:45 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:28.305 09:05:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.305 09:05:45 version -- scripts/common.sh@344 -- # case "$op" in 00:06:28.305 09:05:45 version -- scripts/common.sh@345 -- # : 1 00:06:28.305 09:05:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.305 09:05:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.305 09:05:45 version -- scripts/common.sh@365 -- # decimal 1 00:06:28.305 09:05:45 version -- scripts/common.sh@353 -- # local d=1 00:06:28.305 09:05:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.305 09:05:45 version -- scripts/common.sh@355 -- # echo 1 00:06:28.305 09:05:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.305 09:05:45 version -- scripts/common.sh@366 -- # decimal 2 00:06:28.305 09:05:45 version -- scripts/common.sh@353 -- # local d=2 00:06:28.305 09:05:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.305 09:05:45 version -- scripts/common.sh@355 -- # echo 2 00:06:28.305 09:05:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.305 09:05:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.305 09:05:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.305 09:05:45 version -- scripts/common.sh@368 -- # return 0 00:06:28.305 09:05:45 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.305 09:05:45 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:28.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.305 --rc genhtml_branch_coverage=1 00:06:28.305 --rc genhtml_function_coverage=1 00:06:28.305 --rc genhtml_legend=1 00:06:28.305 --rc geninfo_all_blocks=1 00:06:28.305 --rc geninfo_unexecuted_blocks=1 00:06:28.305 00:06:28.305 ' 00:06:28.305 09:05:45 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:06:28.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.305 --rc genhtml_branch_coverage=1 00:06:28.305 --rc genhtml_function_coverage=1 00:06:28.305 --rc genhtml_legend=1 00:06:28.305 --rc geninfo_all_blocks=1 00:06:28.305 --rc geninfo_unexecuted_blocks=1 00:06:28.305 00:06:28.305 ' 00:06:28.305 09:05:45 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:28.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.306 --rc genhtml_branch_coverage=1 00:06:28.306 --rc genhtml_function_coverage=1 00:06:28.306 --rc genhtml_legend=1 00:06:28.306 --rc geninfo_all_blocks=1 00:06:28.306 --rc geninfo_unexecuted_blocks=1 00:06:28.306 00:06:28.306 ' 00:06:28.306 09:05:45 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:28.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.306 --rc genhtml_branch_coverage=1 00:06:28.306 --rc genhtml_function_coverage=1 00:06:28.306 --rc genhtml_legend=1 00:06:28.306 --rc geninfo_all_blocks=1 00:06:28.306 --rc geninfo_unexecuted_blocks=1 00:06:28.306 00:06:28.306 ' 00:06:28.306 09:05:45 version -- app/version.sh@17 -- # get_header_version major 00:06:28.306 09:05:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.306 09:05:45 version -- app/version.sh@14 -- # cut -f2 00:06:28.306 09:05:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.306 09:05:45 version -- app/version.sh@17 -- # major=25 00:06:28.306 09:05:45 version -- app/version.sh@18 -- # get_header_version minor 00:06:28.306 09:05:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.306 09:05:45 version -- app/version.sh@14 -- # cut -f2 00:06:28.306 09:05:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.306 09:05:45 version -- app/version.sh@18 -- # minor=1 00:06:28.306 09:05:45 
version -- app/version.sh@19 -- # get_header_version patch 00:06:28.306 09:05:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.306 09:05:45 version -- app/version.sh@14 -- # cut -f2 00:06:28.306 09:05:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.306 09:05:45 version -- app/version.sh@19 -- # patch=0 00:06:28.306 09:05:45 version -- app/version.sh@20 -- # get_header_version suffix 00:06:28.306 09:05:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:28.306 09:05:46 version -- app/version.sh@14 -- # cut -f2 00:06:28.306 09:05:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:28.306 09:05:46 version -- app/version.sh@20 -- # suffix=-pre 00:06:28.306 09:05:46 version -- app/version.sh@22 -- # version=25.1 00:06:28.306 09:05:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:28.306 09:05:46 version -- app/version.sh@28 -- # version=25.1rc0 00:06:28.306 09:05:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:28.306 09:05:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.306 09:05:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:28.306 09:05:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:28.306 ************************************ 00:06:28.306 END TEST version 00:06:28.306 ************************************ 00:06:28.306 00:06:28.306 real 0m0.312s 00:06:28.306 user 0m0.183s 00:06:28.306 sys 0m0.186s 00:06:28.306 09:05:46 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.306 09:05:46 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.306 
09:05:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:28.306 09:05:46 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:28.306 09:05:46 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:28.306 09:05:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.306 09:05:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.306 09:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:28.306 ************************************ 00:06:28.306 START TEST bdev_raid 00:06:28.306 ************************************ 00:06:28.306 09:05:46 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:28.565 * Looking for test storage... 00:06:28.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.565 09:05:46 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:28.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.565 --rc genhtml_branch_coverage=1 00:06:28.565 --rc genhtml_function_coverage=1 00:06:28.565 --rc genhtml_legend=1 00:06:28.565 --rc geninfo_all_blocks=1 00:06:28.565 --rc geninfo_unexecuted_blocks=1 00:06:28.565 00:06:28.565 ' 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:28.565 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:28.565 --rc genhtml_branch_coverage=1 00:06:28.565 --rc genhtml_function_coverage=1 00:06:28.565 --rc genhtml_legend=1 00:06:28.565 --rc geninfo_all_blocks=1 00:06:28.565 --rc geninfo_unexecuted_blocks=1 00:06:28.565 00:06:28.565 ' 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:28.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.565 --rc genhtml_branch_coverage=1 00:06:28.565 --rc genhtml_function_coverage=1 00:06:28.565 --rc genhtml_legend=1 00:06:28.565 --rc geninfo_all_blocks=1 00:06:28.565 --rc geninfo_unexecuted_blocks=1 00:06:28.565 00:06:28.565 ' 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:28.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.565 --rc genhtml_branch_coverage=1 00:06:28.565 --rc genhtml_function_coverage=1 00:06:28.565 --rc genhtml_legend=1 00:06:28.565 --rc geninfo_all_blocks=1 00:06:28.565 --rc geninfo_unexecuted_blocks=1 00:06:28.565 00:06:28.565 ' 00:06:28.565 09:05:46 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:28.565 09:05:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.565 09:05:46 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:28.565 09:05:46 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:28.565 09:05:46 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:28.565 09:05:46 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:28.565 09:05:46 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.565 09:05:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:28.565 ************************************ 
00:06:28.565 START TEST raid1_resize_data_offset_test 00:06:28.565 ************************************ 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60005 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60005' 00:06:28.565 Process raid pid: 60005 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60005 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60005 ']' 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.565 09:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.566 09:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.566 09:05:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.824 [2024-10-15 09:05:46.464602] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:06:28.824 [2024-10-15 09:05:46.464835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.824 [2024-10-15 09:05:46.635198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.081 [2024-10-15 09:05:46.759488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.339 [2024-10-15 09:05:46.986162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.340 [2024-10-15 09:05:46.986223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.597 malloc0 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.597 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.856 malloc1 00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.856 09:05:47 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.856 null0 00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.856 [2024-10-15 09:05:47.572897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:29.856 [2024-10-15 09:05:47.575099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:29.856 [2024-10-15 09:05:47.575162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:29.856 [2024-10-15 09:05:47.575339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:29.856 [2024-10-15 09:05:47.575353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:29.856 [2024-10-15 09:05:47.575662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:29.856 [2024-10-15 09:05:47.575899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:29.856 [2024-10-15 09:05:47.575915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:29.856 [2024-10-15 09:05:47.576113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:29.856 [2024-10-15 09:05:47.636913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:29.856 09:05:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.424 malloc2
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.424 [2024-10-15 09:05:48.232086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:30.424 [2024-10-15 09:05:48.253266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:30.424 [2024-10-15 09:05:48.255414] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60005
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60005 ']'
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60005
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:30.424 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60005
00:06:30.683 killing process with pid 60005
09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:30.683 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:30.683 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60005'
00:06:30.683 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60005
00:06:30.683 [2024-10-15 09:05:48.349774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:30.683 09:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60005
00:06:30.683 [2024-10-15 09:05:48.350960] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:30.683 [2024-10-15 09:05:48.351021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:30.683 [2024-10-15 09:05:48.351041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:30.683 [2024-10-15 09:05:48.392580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:30.683 [2024-10-15 09:05:48.392956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:30.683 [2024-10-15 09:05:48.392997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:33.236 [2024-10-15 09:05:50.543544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:34.175 ************************************
00:06:34.175 END TEST raid1_resize_data_offset_test
00:06:34.175 ************************************
00:06:34.175 09:05:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:34.175
00:06:34.175 real 0m5.470s
00:06:34.175 user 0m5.446s
00:06:34.175 sys 0m0.545s
00:06:34.175 09:05:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:34.175 09:05:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:34.175 09:05:51 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:34.175 09:05:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:34.175 09:05:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:34.175 09:05:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:34.175 ************************************
00:06:34.175 START TEST raid0_resize_superblock_test
00:06:34.175 ************************************
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60099
00:06:34.175 Process raid pid: 60099
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60099'
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60099
00:06:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60099 ']'
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:34.175 09:05:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:34.175 [2024-10-15 09:05:51.988196] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization...
00:06:34.175 [2024-10-15 09:05:51.988341] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:34.435 [2024-10-15 09:05:52.161963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.435 [2024-10-15 09:05:52.296355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.694 [2024-10-15 09:05:52.557165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:34.694 [2024-10-15 09:05:52.557226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:35.262 09:05:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:35.262 09:05:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:06:35.262 09:05:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:35.262 09:05:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:35.262 09:05:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.831 malloc0
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.831 [2024-10-15 09:05:53.609594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:35.831 [2024-10-15 09:05:53.609720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:35.831 [2024-10-15 09:05:53.609750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:35.831 [2024-10-15 09:05:53.609765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:35.831 [2024-10-15 09:05:53.612341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:35.831 [2024-10-15 09:05:53.612400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:35.831 pt0
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.831 892b673c-8fe3-44da-8112-065bc6dc45ff
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:35.831 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.090 49da692e-1572-4a0b-a1ea-c7e971713b60
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.090 4d081c0f-7090-4308-86c3-f4f7020d99ef
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.090 [2024-10-15 09:05:53.750574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 49da692e-1572-4a0b-a1ea-c7e971713b60 is claimed
00:06:36.090 [2024-10-15 09:05:53.750784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4d081c0f-7090-4308-86c3-f4f7020d99ef is claimed
00:06:36.090 [2024-10-15 09:05:53.750967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:36.090 [2024-10-15 09:05:53.750992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:06:36.090 [2024-10-15 09:05:53.751352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:36.090 [2024-10-15 09:05:53.751593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:36.090 [2024-10-15 09:05:53.751607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:36.090 [2024-10-15 09:05:53.751831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.090 [2024-10-15 09:05:53.846743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:36.090 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.091 [2024-10-15 09:05:53.894618] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:36.091 [2024-10-15 09:05:53.894774] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '49da692e-1572-4a0b-a1ea-c7e971713b60' was resized: old size 131072, new size 204800
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.091 [2024-10-15 09:05:53.906510] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:36.091 [2024-10-15 09:05:53.906615] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4d081c0f-7090-4308-86c3-f4f7020d99ef' was resized: old size 131072, new size 204800
00:06:36.091 [2024-10-15 09:05:53.906660] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.091 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.350 09:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.350 [2024-10-15 09:05:54.014410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.350 [2024-10-15 09:05:54.058065] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:36.350 [2024-10-15 09:05:54.058240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:36.350 [2024-10-15 09:05:54.058286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:36.350 [2024-10-15 09:05:54.058343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:36.350 [2024-10-15 09:05:54.058521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:36.350 [2024-10-15 09:05:54.058601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:36.350 [2024-10-15 09:05:54.058663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.350 [2024-10-15 09:05:54.065944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:36.350 [2024-10-15 09:05:54.066121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:36.350 [2024-10-15 09:05:54.066194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:06:36.350 [2024-10-15 09:05:54.066253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:36.350 [2024-10-15 09:05:54.069351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:36.350 [2024-10-15 09:05:54.069489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:36.350 pt0
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.350 [2024-10-15 09:05:54.072007] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 49da692e-1572-4a0b-a1ea-c7e971713b60
00:06:36.350 [2024-10-15 09:05:54.072192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 49da692e-1572-4a0b-a1ea-c7e971713b60 is claimed
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.350 [2024-10-15 09:05:54.072388] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4d081c0f-7090-4308-86c3-f4f7020d99ef
00:06:36.350 [2024-10-15 09:05:54.072462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4d081c0f-7090-4308-86c3-f4f7020d99ef is claimed
00:06:36.350 [2024-10-15 09:05:54.072650] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4d081c0f-7090-4308-86c3-f4f7020d99ef (2) smaller than existing raid bdev Raid (3)
00:06:36.350 [2024-10-15 09:05:54.072759] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 49da692e-1572-4a0b-a1ea-c7e971713b60: File exists
00:06:36.350 [2024-10-15 09:05:54.072863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:06:36.350 [2024-10-15 09:05:54.072904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:06:36.350 [2024-10-15 09:05:54.073230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:06:36.350 [2024-10-15 09:05:54.073463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:06:36.350 [2024-10-15 09:05:54.073518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:06:36.350 [2024-10-15 09:05:54.073868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:36.350 [2024-10-15 09:05:54.090306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60099
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60099 ']'
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60099
00:06:36.350 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:06:36.351 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:36.351 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60099
killing process with pid 60099
09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:36.351 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:36.351 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60099'
00:06:36.351 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60099
00:06:36.351 [2024-10-15 09:05:54.147402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:36.351 [2024-10-15 09:05:54.147509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:36.351 09:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60099
00:06:36.351 [2024-10-15 09:05:54.147567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:36.351 [2024-10-15 09:05:54.147579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:38.302 [2024-10-15 09:05:55.897200] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:39.677 ************************************
00:06:39.677 END TEST raid0_resize_superblock_test
00:06:39.677 ************************************
00:06:39.677 09:05:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:39.677
00:06:39.677 real 0m5.340s
00:06:39.677 user 0m5.622s
00:06:39.677 sys 0m0.587s
00:06:39.677 09:05:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:39.677 09:05:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:39.677 09:05:57 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:39.677 09:05:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:39.677 09:05:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:39.677 09:05:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:39.677 ************************************
00:06:39.677 START TEST raid1_resize_superblock_test
************************************
00:06:39.677 Process raid pid: 60209
00:06:39.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60209
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60209'
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60209
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60209 ']'
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:39.677 09:05:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:39.677 [2024-10-15 09:05:57.386905] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization...
00:06:39.677 [2024-10-15 09:05:57.387137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:39.935 [2024-10-15 09:05:57.576877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.935 [2024-10-15 09:05:57.721150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.194 [2024-10-15 09:05:57.981121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:40.194 [2024-10-15 09:05:57.981185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:40.812 09:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:40.812 09:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:06:40.812 09:05:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:40.812 09:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:40.812 09:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.379 malloc0
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.379 [2024-10-15 09:05:59.128398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:41.379 [2024-10-15 09:05:59.128608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:41.379 [2024-10-15 09:05:59.128669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:41.379 [2024-10-15 09:05:59.128733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:41.379 [2024-10-15 09:05:59.131556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:41.379 [2024-10-15 09:05:59.131709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:41.379 pt0
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.379 ceb7433c-451c-4898-89da-2684c6fdd740
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.379 f7593b72-1c82-445b-87fa-f81e00b13f97
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.379 2b2fda58-cadd-4b5b-95c2-331fecacba5a
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.379 [2024-10-15 09:05:59.260014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f7593b72-1c82-445b-87fa-f81e00b13f97 is claimed
00:06:41.379 [2024-10-15 09:05:59.260206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2b2fda58-cadd-4b5b-95c2-331fecacba5a is claimed
00:06:41.379 [2024-10-15 09:05:59.260398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:41.379 [2024-10-15 09:05:59.260418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:06:41.379 [2024-10-15 09:05:59.260833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:41.379 [2024-10-15 09:05:59.261106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:41.379 [2024-10-15 09:05:59.261129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:41.379 [2024-10-15 09:05:59.261362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.379 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.637 [2024-10-15 
09:05:59.376217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.637 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.638 [2024-10-15 09:05:59.424154] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:41.638 [2024-10-15 09:05:59.424221] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f7593b72-1c82-445b-87fa-f81e00b13f97' was resized: old size 131072, new size 204800 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.638 [2024-10-15 09:05:59.432187] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:41.638 [2024-10-15 09:05:59.432247] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2b2fda58-cadd-4b5b-95c2-331fecacba5a' was resized: old size 131072, new size 204800 00:06:41.638 
[2024-10-15 09:05:59.432309] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.638 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.896 09:05:59 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.896 [2024-10-15 09:05:59.547884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.896 [2024-10-15 09:05:59.591524] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:41.896 [2024-10-15 09:05:59.591737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:41.896 [2024-10-15 09:05:59.591803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:41.896 [2024-10-15 09:05:59.592043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:41.896 [2024-10-15 09:05:59.592315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.896 [2024-10-15 09:05:59.592405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.896 
[2024-10-15 09:05:59.592422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.896 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.897 [2024-10-15 09:05:59.603451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:41.897 [2024-10-15 09:05:59.603604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.897 [2024-10-15 09:05:59.603650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:41.897 [2024-10-15 09:05:59.603671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.897 [2024-10-15 09:05:59.606719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.897 [2024-10-15 09:05:59.606796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:41.897 pt0 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.897 [2024-10-15 09:05:59.609120] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f7593b72-1c82-445b-87fa-f81e00b13f97 00:06:41.897 [2024-10-15 09:05:59.609227] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f7593b72-1c82-445b-87fa-f81e00b13f97 is claimed 00:06:41.897 [2024-10-15 09:05:59.609375] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2b2fda58-cadd-4b5b-95c2-331fecacba5a 00:06:41.897 [2024-10-15 09:05:59.609399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2b2fda58-cadd-4b5b-95c2-331fecacba5a is claimed 00:06:41.897 [2024-10-15 09:05:59.609558] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2b2fda58-cadd-4b5b-95c2-331fecacba5a (2) smaller than existing raid bdev Raid (3) 00:06:41.897 [2024-10-15 09:05:59.609583] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f7593b72-1c82-445b-87fa-f81e00b13f97: File exists 00:06:41.897 [2024-10-15 09:05:59.609654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:41.897 [2024-10-15 09:05:59.609669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:41.897 [2024-10-15 09:05:59.609985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:41.897 [2024-10-15 09:05:59.610260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:41.897 [2024-10-15 09:05:59.610279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:41.897 [2024-10-15 09:05:59.610479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case 
$raid_level in 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.897 [2024-10-15 09:05:59.623770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60209 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60209 ']' 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60209 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60209 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
60209' 00:06:41.897 killing process with pid 60209 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60209 00:06:41.897 [2024-10-15 09:05:59.699003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:41.897 09:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60209 00:06:41.897 [2024-10-15 09:05:59.699213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.897 [2024-10-15 09:05:59.699323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.897 [2024-10-15 09:05:59.699379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:43.804 [2024-10-15 09:06:01.465220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.183 09:06:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:45.183 00:06:45.183 real 0m5.487s 00:06:45.183 user 0m5.878s 00:06:45.183 sys 0m0.610s 00:06:45.183 09:06:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.183 09:06:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.183 ************************************ 00:06:45.183 END TEST raid1_resize_superblock_test 00:06:45.183 ************************************ 00:06:45.183 09:06:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:45.183 09:06:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:45.183 09:06:02 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:45.183 09:06:02 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:45.183 09:06:02 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:45.183 09:06:02 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:45.183 09:06:02 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:45.183 09:06:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.183 09:06:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.183 ************************************ 00:06:45.183 START TEST raid_function_test_raid0 00:06:45.183 ************************************ 00:06:45.183 09:06:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60317 00:06:45.184 Process raid pid: 60317 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60317' 00:06:45.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60317 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60317 ']' 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.184 09:06:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.184 [2024-10-15 09:06:02.918351] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:06:45.184 [2024-10-15 09:06:02.918596] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.442 [2024-10-15 09:06:03.088954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.442 [2024-10-15 09:06:03.227138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.700 [2024-10-15 09:06:03.467590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.700 [2024-10-15 09:06:03.467773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.958 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.958 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:45.958 09:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:45.958 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.958 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.216 Base_1 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.216 
09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.216 Base_2 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.216 [2024-10-15 09:06:03.954341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:46.216 [2024-10-15 09:06:03.956629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:46.216 [2024-10-15 09:06:03.956770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.216 [2024-10-15 09:06:03.956786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.216 [2024-10-15 09:06:03.957151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.216 [2024-10-15 09:06:03.957346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.216 [2024-10-15 09:06:03.957358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:46.216 [2024-10-15 09:06:03.957563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.216 09:06:03 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.216 09:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:46.216 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:46.476 [2024-10-15 09:06:04.281932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:46.476 /dev/nbd0 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.476 1+0 records in 00:06:46.476 1+0 records out 00:06:46.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398605 s, 10.3 MB/s 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.476 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:46.734 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.734 { 00:06:46.734 "nbd_device": "/dev/nbd0", 00:06:46.734 "bdev_name": "raid" 00:06:46.734 } 00:06:46.734 ]' 00:06:46.734 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.734 { 00:06:46.734 "nbd_device": "/dev/nbd0", 00:06:46.734 "bdev_name": "raid" 00:06:46.734 } 00:06:46.734 ]' 00:06:46.734 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:46.992 4096+0 records in 00:06:46.992 4096+0 records out 00:06:46.992 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0211081 s, 99.4 MB/s 00:06:46.992 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:47.252 4096+0 records in 00:06:47.252 4096+0 records out 00:06:47.252 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.274982 s, 7.6 MB/s 00:06:47.252 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:47.252 09:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:47.252 128+0 records in 00:06:47.252 128+0 records out 00:06:47.252 65536 bytes (66 kB, 64 KiB) copied, 0.00173727 s, 37.7 MB/s 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:47.252 2035+0 records in 00:06:47.252 2035+0 records out 00:06:47.252 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00663803 s, 157 MB/s 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:47.252 456+0 records in 00:06:47.252 456+0 records out 00:06:47.252 233472 bytes (233 kB, 228 KiB) copied, 0.00319736 s, 73.0 MB/s 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.252 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:47.511 [2024-10-15 09:06:05.372465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.511 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:06:47.769 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.769 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.769 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60317 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60317 ']' 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60317 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60317 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:48.065 killing process with pid 60317 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60317' 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60317 00:06:48.065 09:06:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60317 00:06:48.065 [2024-10-15 09:06:05.736671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.065 [2024-10-15 09:06:05.736815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.065 [2024-10-15 09:06:05.736879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.065 [2024-10-15 09:06:05.736894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:48.324 [2024-10-15 09:06:05.996932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.698 09:06:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:49.698 00:06:49.698 real 0m4.530s 00:06:49.698 user 0m5.462s 00:06:49.698 sys 0m0.950s 00:06:49.698 09:06:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.698 09:06:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.698 ************************************ 00:06:49.698 END TEST raid_function_test_raid0 00:06:49.698 ************************************ 00:06:49.698 09:06:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:49.698 09:06:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:49.698 09:06:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.698 09:06:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.698 
************************************ 00:06:49.698 START TEST raid_function_test_concat 00:06:49.698 ************************************ 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60452 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60452' 00:06:49.698 Process raid pid: 60452 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60452 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60452 ']' 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.698 09:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.698 [2024-10-15 09:06:07.507536] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:06:49.698 [2024-10-15 09:06:07.507761] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.958 [2024-10-15 09:06:07.683974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.958 [2024-10-15 09:06:07.834234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.217 [2024-10-15 09:06:08.089888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.217 [2024-10-15 09:06:08.089949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 Base_1 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 Base_2 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 [2024-10-15 09:06:08.609271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:50.784 [2024-10-15 09:06:08.611636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:50.784 [2024-10-15 09:06:08.611793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:50.784 [2024-10-15 09:06:08.611809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:50.784 [2024-10-15 09:06:08.612176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:50.784 [2024-10-15 09:06:08.612383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:50.784 [2024-10-15 09:06:08.612402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:50.784 [2024-10-15 09:06:08.612639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.784 09:06:08 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.784 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:51.042 [2024-10-15 09:06:08.880859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:51.042 /dev/nbd0 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.042 1+0 records in 00:06:51.042 1+0 records out 00:06:51.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332442 s, 12.3 MB/s 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.042 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.301 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.301 09:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:51.301 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.301 
09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:51.301 09:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:51.301 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.301 09:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:51.301 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.301 { 00:06:51.301 "nbd_device": "/dev/nbd0", 00:06:51.301 "bdev_name": "raid" 00:06:51.301 } 00:06:51.301 ]' 00:06:51.301 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.301 { 00:06:51.301 "nbd_device": "/dev/nbd0", 00:06:51.301 "bdev_name": "raid" 00:06:51.301 } 00:06:51.301 ]' 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:51.560 
09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:51.560 4096+0 records in 00:06:51.560 4096+0 records out 00:06:51.560 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0339553 s, 61.8 MB/s 00:06:51.560 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:51.820 4096+0 records in 00:06:51.820 4096+0 
records out 00:06:51.820 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.257354 s, 8.1 MB/s 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:51.820 128+0 records in 00:06:51.820 128+0 records out 00:06:51.820 65536 bytes (66 kB, 64 KiB) copied, 0.00129532 s, 50.6 MB/s 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:06:51.820 2035+0 records in 00:06:51.820 2035+0 records out 00:06:51.820 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0131654 s, 79.1 MB/s 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:51.820 456+0 records in 00:06:51.820 456+0 records out 00:06:51.820 233472 bytes (233 kB, 228 KiB) copied, 0.00384118 s, 60.8 MB/s 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.820 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:52.079 [2024-10-15 09:06:09.952835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:52.079 09:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:52.079 09:06:09 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:52.336 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.336 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.336 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60452 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60452 ']' 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60452 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60452 00:06:52.595 killing process with pid 60452 00:06:52.595 09:06:10 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60452' 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60452 00:06:52.595 [2024-10-15 09:06:10.316639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.595 09:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60452 00:06:52.595 [2024-10-15 09:06:10.316774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.595 [2024-10-15 09:06:10.316836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.595 [2024-10-15 09:06:10.316860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:52.853 [2024-10-15 09:06:10.568051] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.271 ************************************ 00:06:54.271 END TEST raid_function_test_concat 00:06:54.271 ************************************ 00:06:54.271 09:06:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:54.271 00:06:54.271 real 0m4.501s 00:06:54.271 user 0m5.301s 00:06:54.271 sys 0m1.094s 00:06:54.271 09:06:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.271 09:06:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:54.271 09:06:11 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:54.271 09:06:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:54.271 09:06:11 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.271 09:06:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.271 ************************************ 00:06:54.271 START TEST raid0_resize_test 00:06:54.271 ************************************ 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:54.271 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60581 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60581' 00:06:54.272 Process raid pid: 60581 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60581 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60581 ']' 00:06:54.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.272 09:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.272 [2024-10-15 09:06:12.065960] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:06:54.272 [2024-10-15 09:06:12.066173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.540 [2024-10-15 09:06:12.233552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.540 [2024-10-15 09:06:12.397063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.798 [2024-10-15 09:06:12.647117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.798 [2024-10-15 09:06:12.647178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.366 Base_1 
00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.366 Base_2 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.366 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.366 [2024-10-15 09:06:13.047911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:55.366 [2024-10-15 09:06:13.050051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:55.366 [2024-10-15 09:06:13.050136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.366 [2024-10-15 09:06:13.050150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:55.366 [2024-10-15 09:06:13.050479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:55.367 [2024-10-15 09:06:13.050624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.367 [2024-10-15 09:06:13.050635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:55.367 [2024-10-15 09:06:13.050858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
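The raid0 numbers traced above follow directly from the test parameters (`blksize=512`, `bdev_size_mb=32`, `new_bdev_size_mb=64`, two base bdevs). A quick sketch of that arithmetic; the helper names below are illustrative, not part of SPDK's test scripts:

```python
# Sketch of the size arithmetic behind raid0_resize_test.
# Helper names are hypothetical; only the numbers come from the trace.

BLKSIZE = 512          # blksize from bdev_raid.sh
BDEV_SIZE_MB = 32      # each Base_N null bdev starts at 32 MiB
NEW_BDEV_SIZE_MB = 64  # size after bdev_null_resize

def mb_to_blocks(mb, blksize=BLKSIZE):
    # MiB -> 512-byte block count
    return mb * 1024 * 1024 // blksize

def raid0_blocks(base_blocks, num_bases):
    # raid0 stripes across all members, so capacity is the sum
    return base_blocks * num_bases

base_blocks = mb_to_blocks(BDEV_SIZE_MB)    # 65536 blocks per base
raid_blocks = raid0_blocks(base_blocks, 2)  # 131072: matches "blockcnt 131072, blocklen 512"

# After both bases are resized to 64 MiB the array doubles:
resized = raid0_blocks(mb_to_blocks(NEW_BDEV_SIZE_MB), 2)  # 262144
```

This matches the later trace line "block count was changed from 131072 to 262144" once both base bdevs have been resized.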
00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.367 [2024-10-15 09:06:13.059853] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.367 [2024-10-15 09:06:13.059898] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:55.367 true 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:55.367 [2024-10-15 09:06:13.072010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.367 [2024-10-15 09:06:13.123744] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.367 [2024-10-15 09:06:13.123790] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:55.367 [2024-10-15 09:06:13.123826] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:55.367 true 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.367 [2024-10-15 09:06:13.135933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60581 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60581 ']' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60581 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60581 00:06:55.367 killing process with pid 60581 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60581' 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60581 00:06:55.367 [2024-10-15 09:06:13.224190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.367 [2024-10-15 09:06:13.224305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.367 [2024-10-15 09:06:13.224363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.367 [2024-10-15 09:06:13.224375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:55.367 09:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60581 00:06:55.367 [2024-10-15 09:06:13.244726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.747 ************************************ 00:06:56.747 END TEST raid0_resize_test 00:06:56.747 ************************************ 00:06:56.747 09:06:14 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:06:56.747 00:06:56.747 real 0m2.543s 00:06:56.747 user 0m2.765s 00:06:56.747 sys 0m0.384s 00:06:56.747 09:06:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.747 09:06:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.747 09:06:14 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:56.747 09:06:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.747 09:06:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.747 09:06:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.747 ************************************ 00:06:56.747 START TEST raid1_resize_test 00:06:56.747 ************************************ 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60648 00:06:56.747 Process raid pid: 60648 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60648' 00:06:56.747 09:06:14 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60648 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60648 ']' 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.747 09:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.005 [2024-10-15 09:06:14.652057] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
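raid1_resize_test, started above, exercises the mirroring rule that a raid1 bdev can only be as large as its smallest member: resizing one base bdev alone must not change the array's block count, while resizing both does. A minimal model of that invariant; the function name is illustrative, not an SPDK API:

```python
# Illustrative model of the resize rule checked by raid1_resize_test:
# a mirror's usable block count is the minimum across its base bdevs.

def raid1_blocks(base_block_counts):
    return min(base_block_counts)

bases = [65536, 65536]        # two 32 MiB bases at 512-byte blocks
assert raid1_blocks(bases) == 65536

bases[0] = 131072             # bdev_null_resize Base_1 64: raid unchanged
assert raid1_blocks(bases) == 65536

bases[1] = 131072             # resize Base_2 as well: raid grows
assert raid1_blocks(bases) == 131072
```

The final step corresponds to the trace line "raid bdev 'Raid': block count was changed from 65536 to 131072", which only appears after the second base bdev is resized.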
00:06:57.005 [2024-10-15 09:06:14.652228] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.005 [2024-10-15 09:06:14.807226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.263 [2024-10-15 09:06:14.947815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.523 [2024-10-15 09:06:15.198894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.523 [2024-10-15 09:06:15.198956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.783 Base_1 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.783 Base_2 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.783 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.783 [2024-10-15 09:06:15.607874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:57.783 [2024-10-15 09:06:15.610139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:57.783 [2024-10-15 09:06:15.610242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:57.783 [2024-10-15 09:06:15.610256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:57.783 [2024-10-15 09:06:15.610598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:57.783 [2024-10-15 09:06:15.610795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:57.784 [2024-10-15 09:06:15.610822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:57.784 [2024-10-15 09:06:15.611036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.784 [2024-10-15 09:06:15.615788] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:57.784 [2024-10-15 09:06:15.615828] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:57.784 true 00:06:57.784 
09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.784 [2024-10-15 09:06:15.631965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.784 [2024-10-15 09:06:15.663742] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:57.784 [2024-10-15 09:06:15.663780] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:57.784 [2024-10-15 09:06:15.663821] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:57.784 true 00:06:57.784 09:06:15 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.784 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:57.784 [2024-10-15 09:06:15.675918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60648 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60648 ']' 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60648 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60648 00:06:58.043 killing process with pid 60648 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.043 09:06:15 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60648' 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60648 00:06:58.043 09:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60648 00:06:58.043 [2024-10-15 09:06:15.759570] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.043 [2024-10-15 09:06:15.759697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.043 [2024-10-15 09:06:15.760267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.043 [2024-10-15 09:06:15.760292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:58.043 [2024-10-15 09:06:15.780959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.421 09:06:17 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:59.421 00:06:59.421 real 0m2.506s 00:06:59.421 user 0m2.705s 00:06:59.421 sys 0m0.358s 00:06:59.421 09:06:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.421 ************************************ 00:06:59.421 END TEST raid1_resize_test 00:06:59.421 ************************************ 00:06:59.421 09:06:17 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.421 09:06:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:59.421 09:06:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:59.421 09:06:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:59.421 09:06:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:59.421 09:06:17 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.421 09:06:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.421 ************************************ 00:06:59.421 START TEST raid_state_function_test 00:06:59.421 ************************************ 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:59.421 Process raid pid: 60705 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60705 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60705' 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60705 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60705 ']' 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:59.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.421 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.421 [2024-10-15 09:06:17.226463] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:06:59.421 [2024-10-15 09:06:17.226663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.681 [2024-10-15 09:06:17.416359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.941 [2024-10-15 09:06:17.580920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.941 [2024-10-15 09:06:17.835388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.941 [2024-10-15 09:06:17.835558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.515 [2024-10-15 09:06:18.148115] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:00.515 [2024-10-15 09:06:18.148189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:07:00.515 [2024-10-15 09:06:18.148202] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.515 [2024-10-15 09:06:18.148213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.515 "name": "Existed_Raid", 00:07:00.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.515 "strip_size_kb": 64, 00:07:00.515 "state": "configuring", 00:07:00.515 "raid_level": "raid0", 00:07:00.515 "superblock": false, 00:07:00.515 "num_base_bdevs": 2, 00:07:00.515 "num_base_bdevs_discovered": 0, 00:07:00.515 "num_base_bdevs_operational": 2, 00:07:00.515 "base_bdevs_list": [ 00:07:00.515 { 00:07:00.515 "name": "BaseBdev1", 00:07:00.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.515 "is_configured": false, 00:07:00.515 "data_offset": 0, 00:07:00.515 "data_size": 0 00:07:00.515 }, 00:07:00.515 { 00:07:00.515 "name": "BaseBdev2", 00:07:00.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.515 "is_configured": false, 00:07:00.515 "data_offset": 0, 00:07:00.515 "data_size": 0 00:07:00.515 } 00:07:00.515 ] 00:07:00.515 }' 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.515 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.774 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:00.774 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.774 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.774 [2024-10-15 09:06:18.571797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.774 [2024-10-15 09:06:18.571848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:00.774 09:06:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.774 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.775 [2024-10-15 09:06:18.583837] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:00.775 [2024-10-15 09:06:18.583901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:00.775 [2024-10-15 09:06:18.583913] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.775 [2024-10-15 09:06:18.583926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.775 [2024-10-15 09:06:18.635205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.775 BaseBdev1 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.775 [ 00:07:00.775 { 00:07:00.775 "name": "BaseBdev1", 00:07:00.775 "aliases": [ 00:07:00.775 "ec03276b-fe5f-4ad3-8ac7-672231296ade" 00:07:00.775 ], 00:07:00.775 "product_name": "Malloc disk", 00:07:00.775 "block_size": 512, 00:07:00.775 "num_blocks": 65536, 00:07:00.775 "uuid": "ec03276b-fe5f-4ad3-8ac7-672231296ade", 00:07:00.775 "assigned_rate_limits": { 00:07:00.775 "rw_ios_per_sec": 0, 00:07:00.775 "rw_mbytes_per_sec": 0, 00:07:00.775 "r_mbytes_per_sec": 0, 00:07:00.775 "w_mbytes_per_sec": 0 00:07:00.775 }, 00:07:00.775 "claimed": true, 00:07:00.775 "claim_type": "exclusive_write", 00:07:00.775 "zoned": false, 00:07:00.775 "supported_io_types": { 00:07:00.775 "read": true, 00:07:00.775 "write": true, 00:07:00.775 "unmap": true, 00:07:00.775 "flush": true, 00:07:00.775 "reset": true, 00:07:00.775 "nvme_admin": false, 00:07:00.775 "nvme_io": 
false, 00:07:00.775 "nvme_io_md": false, 00:07:00.775 "write_zeroes": true, 00:07:00.775 "zcopy": true, 00:07:00.775 "get_zone_info": false, 00:07:00.775 "zone_management": false, 00:07:00.775 "zone_append": false, 00:07:00.775 "compare": false, 00:07:00.775 "compare_and_write": false, 00:07:00.775 "abort": true, 00:07:00.775 "seek_hole": false, 00:07:00.775 "seek_data": false, 00:07:00.775 "copy": true, 00:07:00.775 "nvme_iov_md": false 00:07:00.775 }, 00:07:00.775 "memory_domains": [ 00:07:00.775 { 00:07:00.775 "dma_device_id": "system", 00:07:00.775 "dma_device_type": 1 00:07:00.775 }, 00:07:00.775 { 00:07:00.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.775 "dma_device_type": 2 00:07:00.775 } 00:07:00.775 ], 00:07:00.775 "driver_specific": {} 00:07:00.775 } 00:07:00.775 ] 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.775 09:06:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.775 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.035 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.035 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.035 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.035 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.035 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.035 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.035 "name": "Existed_Raid", 00:07:01.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.035 "strip_size_kb": 64, 00:07:01.035 "state": "configuring", 00:07:01.035 "raid_level": "raid0", 00:07:01.035 "superblock": false, 00:07:01.035 "num_base_bdevs": 2, 00:07:01.035 "num_base_bdevs_discovered": 1, 00:07:01.035 "num_base_bdevs_operational": 2, 00:07:01.035 "base_bdevs_list": [ 00:07:01.035 { 00:07:01.035 "name": "BaseBdev1", 00:07:01.035 "uuid": "ec03276b-fe5f-4ad3-8ac7-672231296ade", 00:07:01.035 "is_configured": true, 00:07:01.035 "data_offset": 0, 00:07:01.035 "data_size": 65536 00:07:01.035 }, 00:07:01.035 { 00:07:01.035 "name": "BaseBdev2", 00:07:01.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.035 "is_configured": false, 00:07:01.035 "data_offset": 0, 00:07:01.035 "data_size": 0 00:07:01.035 } 00:07:01.035 ] 00:07:01.035 }' 00:07:01.035 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.035 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.295 09:06:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.295 [2024-10-15 09:06:19.146618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:01.295 [2024-10-15 09:06:19.146810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.295 [2024-10-15 09:06:19.158726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:01.295 [2024-10-15 09:06:19.161005] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:01.295 [2024-10-15 09:06:19.161128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.295 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.554 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.554 "name": "Existed_Raid", 00:07:01.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.554 "strip_size_kb": 64, 00:07:01.554 "state": "configuring", 00:07:01.554 "raid_level": "raid0", 00:07:01.554 "superblock": false, 00:07:01.554 "num_base_bdevs": 2, 00:07:01.554 "num_base_bdevs_discovered": 1, 00:07:01.554 "num_base_bdevs_operational": 2, 
00:07:01.554 "base_bdevs_list": [ 00:07:01.554 { 00:07:01.554 "name": "BaseBdev1", 00:07:01.554 "uuid": "ec03276b-fe5f-4ad3-8ac7-672231296ade", 00:07:01.554 "is_configured": true, 00:07:01.554 "data_offset": 0, 00:07:01.554 "data_size": 65536 00:07:01.554 }, 00:07:01.554 { 00:07:01.554 "name": "BaseBdev2", 00:07:01.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.554 "is_configured": false, 00:07:01.554 "data_offset": 0, 00:07:01.554 "data_size": 0 00:07:01.554 } 00:07:01.554 ] 00:07:01.554 }' 00:07:01.554 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.554 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.813 [2024-10-15 09:06:19.663684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:01.813 [2024-10-15 09:06:19.663848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:01.813 [2024-10-15 09:06:19.663880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:01.813 [2024-10-15 09:06:19.664224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:01.813 [2024-10-15 09:06:19.664463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:01.813 [2024-10-15 09:06:19.664522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:01.813 [2024-10-15 09:06:19.664913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.813 BaseBdev2 00:07:01.813 
09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.813 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.813 [ 00:07:01.813 { 00:07:01.813 "name": "BaseBdev2", 00:07:01.813 "aliases": [ 00:07:01.813 "bb44abfe-ea85-4555-b832-f32cc787e295" 00:07:01.813 ], 00:07:01.813 "product_name": "Malloc disk", 00:07:01.813 "block_size": 512, 00:07:01.813 "num_blocks": 65536, 00:07:01.813 "uuid": "bb44abfe-ea85-4555-b832-f32cc787e295", 00:07:01.813 "assigned_rate_limits": { 00:07:01.813 "rw_ios_per_sec": 0, 00:07:01.813 "rw_mbytes_per_sec": 0, 
00:07:01.813 "r_mbytes_per_sec": 0, 00:07:01.814 "w_mbytes_per_sec": 0 00:07:01.814 }, 00:07:01.814 "claimed": true, 00:07:01.814 "claim_type": "exclusive_write", 00:07:01.814 "zoned": false, 00:07:01.814 "supported_io_types": { 00:07:01.814 "read": true, 00:07:01.814 "write": true, 00:07:01.814 "unmap": true, 00:07:01.814 "flush": true, 00:07:01.814 "reset": true, 00:07:01.814 "nvme_admin": false, 00:07:01.814 "nvme_io": false, 00:07:01.814 "nvme_io_md": false, 00:07:01.814 "write_zeroes": true, 00:07:01.814 "zcopy": true, 00:07:01.814 "get_zone_info": false, 00:07:01.814 "zone_management": false, 00:07:01.814 "zone_append": false, 00:07:01.814 "compare": false, 00:07:01.814 "compare_and_write": false, 00:07:01.814 "abort": true, 00:07:01.814 "seek_hole": false, 00:07:01.814 "seek_data": false, 00:07:01.814 "copy": true, 00:07:01.814 "nvme_iov_md": false 00:07:01.814 }, 00:07:01.814 "memory_domains": [ 00:07:01.814 { 00:07:01.814 "dma_device_id": "system", 00:07:01.814 "dma_device_type": 1 00:07:01.814 }, 00:07:01.814 { 00:07:01.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.814 "dma_device_type": 2 00:07:01.814 } 00:07:01.814 ], 00:07:01.814 "driver_specific": {} 00:07:01.814 } 00:07:01.814 ] 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.814 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.073 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.073 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.073 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.073 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.073 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.073 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.073 "name": "Existed_Raid", 00:07:02.073 "uuid": "100c346b-26dd-4ef7-a28e-1e7950cdab13", 00:07:02.073 "strip_size_kb": 64, 00:07:02.073 "state": "online", 00:07:02.073 "raid_level": "raid0", 00:07:02.073 "superblock": false, 00:07:02.073 "num_base_bdevs": 2, 00:07:02.073 "num_base_bdevs_discovered": 2, 00:07:02.073 "num_base_bdevs_operational": 2, 00:07:02.073 "base_bdevs_list": [ 00:07:02.073 { 00:07:02.073 "name": "BaseBdev1", 00:07:02.073 "uuid": "ec03276b-fe5f-4ad3-8ac7-672231296ade", 00:07:02.073 
"is_configured": true, 00:07:02.073 "data_offset": 0, 00:07:02.073 "data_size": 65536 00:07:02.073 }, 00:07:02.073 { 00:07:02.073 "name": "BaseBdev2", 00:07:02.073 "uuid": "bb44abfe-ea85-4555-b832-f32cc787e295", 00:07:02.073 "is_configured": true, 00:07:02.073 "data_offset": 0, 00:07:02.073 "data_size": 65536 00:07:02.073 } 00:07:02.073 ] 00:07:02.073 }' 00:07:02.073 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.073 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.332 [2024-10-15 09:06:20.155285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.332 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:02.332 "name": "Existed_Raid", 00:07:02.332 "aliases": [ 00:07:02.332 "100c346b-26dd-4ef7-a28e-1e7950cdab13" 00:07:02.332 ], 00:07:02.332 "product_name": "Raid Volume", 00:07:02.332 "block_size": 512, 00:07:02.332 "num_blocks": 131072, 00:07:02.332 "uuid": "100c346b-26dd-4ef7-a28e-1e7950cdab13", 00:07:02.332 "assigned_rate_limits": { 00:07:02.332 "rw_ios_per_sec": 0, 00:07:02.332 "rw_mbytes_per_sec": 0, 00:07:02.332 "r_mbytes_per_sec": 0, 00:07:02.332 "w_mbytes_per_sec": 0 00:07:02.332 }, 00:07:02.332 "claimed": false, 00:07:02.332 "zoned": false, 00:07:02.332 "supported_io_types": { 00:07:02.332 "read": true, 00:07:02.332 "write": true, 00:07:02.332 "unmap": true, 00:07:02.332 "flush": true, 00:07:02.332 "reset": true, 00:07:02.332 "nvme_admin": false, 00:07:02.332 "nvme_io": false, 00:07:02.332 "nvme_io_md": false, 00:07:02.332 "write_zeroes": true, 00:07:02.332 "zcopy": false, 00:07:02.332 "get_zone_info": false, 00:07:02.333 "zone_management": false, 00:07:02.333 "zone_append": false, 00:07:02.333 "compare": false, 00:07:02.333 "compare_and_write": false, 00:07:02.333 "abort": false, 00:07:02.333 "seek_hole": false, 00:07:02.333 "seek_data": false, 00:07:02.333 "copy": false, 00:07:02.333 "nvme_iov_md": false 00:07:02.333 }, 00:07:02.333 "memory_domains": [ 00:07:02.333 { 00:07:02.333 "dma_device_id": "system", 00:07:02.333 "dma_device_type": 1 00:07:02.333 }, 00:07:02.333 { 00:07:02.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.333 "dma_device_type": 2 00:07:02.333 }, 00:07:02.333 { 00:07:02.333 "dma_device_id": "system", 00:07:02.333 "dma_device_type": 1 00:07:02.333 }, 00:07:02.333 { 00:07:02.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.333 "dma_device_type": 2 00:07:02.333 } 00:07:02.333 ], 00:07:02.333 "driver_specific": { 00:07:02.333 "raid": { 00:07:02.333 "uuid": "100c346b-26dd-4ef7-a28e-1e7950cdab13", 00:07:02.333 "strip_size_kb": 64, 00:07:02.333 "state": "online", 00:07:02.333 "raid_level": "raid0", 
00:07:02.333 "superblock": false, 00:07:02.333 "num_base_bdevs": 2, 00:07:02.333 "num_base_bdevs_discovered": 2, 00:07:02.333 "num_base_bdevs_operational": 2, 00:07:02.333 "base_bdevs_list": [ 00:07:02.333 { 00:07:02.333 "name": "BaseBdev1", 00:07:02.333 "uuid": "ec03276b-fe5f-4ad3-8ac7-672231296ade", 00:07:02.333 "is_configured": true, 00:07:02.333 "data_offset": 0, 00:07:02.333 "data_size": 65536 00:07:02.333 }, 00:07:02.333 { 00:07:02.333 "name": "BaseBdev2", 00:07:02.333 "uuid": "bb44abfe-ea85-4555-b832-f32cc787e295", 00:07:02.333 "is_configured": true, 00:07:02.333 "data_offset": 0, 00:07:02.333 "data_size": 65536 00:07:02.333 } 00:07:02.333 ] 00:07:02.333 } 00:07:02.333 } 00:07:02.333 }' 00:07:02.333 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:02.592 BaseBdev2' 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:02.592 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.593 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.593 [2024-10-15 09:06:20.402793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:02.593 [2024-10-15 09:06:20.402929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:02.593 [2024-10-15 09:06:20.403046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.852 09:06:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:02.852 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.853 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.853 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.853 "name": "Existed_Raid", 00:07:02.853 "uuid": "100c346b-26dd-4ef7-a28e-1e7950cdab13", 00:07:02.853 "strip_size_kb": 64, 00:07:02.853 "state": "offline", 00:07:02.853 "raid_level": "raid0", 00:07:02.853 "superblock": false, 00:07:02.853 "num_base_bdevs": 2, 00:07:02.853 "num_base_bdevs_discovered": 1, 00:07:02.853 "num_base_bdevs_operational": 1, 00:07:02.853 "base_bdevs_list": [ 00:07:02.853 { 00:07:02.853 "name": null, 00:07:02.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.853 "is_configured": false, 00:07:02.853 "data_offset": 0, 00:07:02.853 "data_size": 65536 00:07:02.853 }, 00:07:02.853 { 00:07:02.853 "name": "BaseBdev2", 00:07:02.853 "uuid": "bb44abfe-ea85-4555-b832-f32cc787e295", 00:07:02.853 "is_configured": true, 00:07:02.853 "data_offset": 0, 00:07:02.853 "data_size": 65536 00:07:02.853 } 00:07:02.853 ] 00:07:02.853 }' 00:07:02.853 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.853 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.111 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:03.111 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:03.111 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:03.111 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.111 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.111 09:06:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 [2024-10-15 09:06:21.031813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:03.370 [2024-10-15 09:06:21.031978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60705 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60705 ']' 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60705 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60705 00:07:03.370 killing process with pid 60705 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60705' 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60705 00:07:03.370 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60705 00:07:03.370 [2024-10-15 09:06:21.239534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.370 [2024-10-15 09:06:21.259860] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.747 ************************************ 00:07:04.747 END TEST raid_state_function_test 00:07:04.747 ************************************ 00:07:04.747 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:04.747 00:07:04.747 real 0m5.470s 
00:07:04.747 user 0m7.827s 00:07:04.747 sys 0m0.841s 00:07:04.747 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.747 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.747 09:06:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:04.747 09:06:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:04.747 09:06:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.747 09:06:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.007 ************************************ 00:07:05.007 START TEST raid_state_function_test_sb 00:07:05.007 ************************************ 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:05.007 Process raid pid: 60958 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60958 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60958' 00:07:05.007 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60958 00:07:05.008 09:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' 
-z 60958 ']' 00:07:05.008 09:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.008 09:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:05.008 09:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.008 09:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.008 09:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.008 09:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.008 [2024-10-15 09:06:22.751485] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:07:05.008 [2024-10-15 09:06:22.751625] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.268 [2024-10-15 09:06:22.923738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.268 [2024-10-15 09:06:23.067281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.527 [2024-10-15 09:06:23.322610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.527 [2024-10-15 09:06:23.322669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.788 [2024-10-15 09:06:23.676508] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.788 [2024-10-15 09:06:23.676585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.788 [2024-10-15 09:06:23.676596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.788 [2024-10-15 09:06:23.676625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.788 
09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.788 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.047 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.047 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.047 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.047 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.047 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.047 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.047 "name": "Existed_Raid", 00:07:06.047 "uuid": "270939cd-5285-4017-8a19-3fd790e5a631", 00:07:06.047 "strip_size_kb": 
64, 00:07:06.047 "state": "configuring", 00:07:06.047 "raid_level": "raid0", 00:07:06.047 "superblock": true, 00:07:06.047 "num_base_bdevs": 2, 00:07:06.047 "num_base_bdevs_discovered": 0, 00:07:06.047 "num_base_bdevs_operational": 2, 00:07:06.047 "base_bdevs_list": [ 00:07:06.047 { 00:07:06.047 "name": "BaseBdev1", 00:07:06.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.047 "is_configured": false, 00:07:06.047 "data_offset": 0, 00:07:06.047 "data_size": 0 00:07:06.047 }, 00:07:06.047 { 00:07:06.047 "name": "BaseBdev2", 00:07:06.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.047 "is_configured": false, 00:07:06.047 "data_offset": 0, 00:07:06.047 "data_size": 0 00:07:06.047 } 00:07:06.047 ] 00:07:06.047 }' 00:07:06.047 09:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.047 09:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.307 [2024-10-15 09:06:24.071791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:06.307 [2024-10-15 09:06:24.071846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.307 09:06:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.307 [2024-10-15 09:06:24.083824] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:06.307 [2024-10-15 09:06:24.083889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:06.307 [2024-10-15 09:06:24.083901] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.307 [2024-10-15 09:06:24.083915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.307 [2024-10-15 09:06:24.140096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.307 BaseBdev1 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.307 [ 00:07:06.307 { 00:07:06.307 "name": "BaseBdev1", 00:07:06.307 "aliases": [ 00:07:06.307 "0512228a-b57b-4011-b2f3-44a167c8d431" 00:07:06.307 ], 00:07:06.307 "product_name": "Malloc disk", 00:07:06.307 "block_size": 512, 00:07:06.307 "num_blocks": 65536, 00:07:06.307 "uuid": "0512228a-b57b-4011-b2f3-44a167c8d431", 00:07:06.307 "assigned_rate_limits": { 00:07:06.307 "rw_ios_per_sec": 0, 00:07:06.307 "rw_mbytes_per_sec": 0, 00:07:06.307 "r_mbytes_per_sec": 0, 00:07:06.307 "w_mbytes_per_sec": 0 00:07:06.307 }, 00:07:06.307 "claimed": true, 00:07:06.307 "claim_type": "exclusive_write", 00:07:06.307 "zoned": false, 00:07:06.307 "supported_io_types": { 00:07:06.307 "read": true, 00:07:06.307 "write": true, 00:07:06.307 "unmap": true, 00:07:06.307 "flush": true, 00:07:06.307 "reset": true, 00:07:06.307 "nvme_admin": false, 00:07:06.307 "nvme_io": false, 00:07:06.307 "nvme_io_md": false, 00:07:06.307 "write_zeroes": true, 00:07:06.307 "zcopy": true, 00:07:06.307 "get_zone_info": false, 00:07:06.307 "zone_management": false, 00:07:06.307 "zone_append": false, 00:07:06.307 "compare": false, 00:07:06.307 "compare_and_write": false, 00:07:06.307 
"abort": true, 00:07:06.307 "seek_hole": false, 00:07:06.307 "seek_data": false, 00:07:06.307 "copy": true, 00:07:06.307 "nvme_iov_md": false 00:07:06.307 }, 00:07:06.307 "memory_domains": [ 00:07:06.307 { 00:07:06.307 "dma_device_id": "system", 00:07:06.307 "dma_device_type": 1 00:07:06.307 }, 00:07:06.307 { 00:07:06.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.307 "dma_device_type": 2 00:07:06.307 } 00:07:06.307 ], 00:07:06.307 "driver_specific": {} 00:07:06.307 } 00:07:06.307 ] 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.307 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.567 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.567 "name": "Existed_Raid", 00:07:06.567 "uuid": "ac80ca26-5db0-481a-99ce-1db5447007b0", 00:07:06.567 "strip_size_kb": 64, 00:07:06.567 "state": "configuring", 00:07:06.567 "raid_level": "raid0", 00:07:06.567 "superblock": true, 00:07:06.567 "num_base_bdevs": 2, 00:07:06.567 "num_base_bdevs_discovered": 1, 00:07:06.567 "num_base_bdevs_operational": 2, 00:07:06.567 "base_bdevs_list": [ 00:07:06.567 { 00:07:06.567 "name": "BaseBdev1", 00:07:06.567 "uuid": "0512228a-b57b-4011-b2f3-44a167c8d431", 00:07:06.567 "is_configured": true, 00:07:06.567 "data_offset": 2048, 00:07:06.567 "data_size": 63488 00:07:06.567 }, 00:07:06.567 { 00:07:06.567 "name": "BaseBdev2", 00:07:06.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.567 "is_configured": false, 00:07:06.567 "data_offset": 0, 00:07:06.567 "data_size": 0 00:07:06.567 } 00:07:06.567 ] 00:07:06.567 }' 00:07:06.567 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.567 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.827 [2024-10-15 09:06:24.635420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:06.827 [2024-10-15 09:06:24.635577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.827 [2024-10-15 09:06:24.647492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.827 [2024-10-15 09:06:24.649774] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.827 [2024-10-15 09:06:24.649878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.827 "name": "Existed_Raid", 00:07:06.827 "uuid": "8180cef6-4cd6-45da-ad43-4dd8b800353a", 00:07:06.827 "strip_size_kb": 64, 00:07:06.827 "state": "configuring", 00:07:06.827 "raid_level": "raid0", 00:07:06.827 "superblock": true, 00:07:06.827 "num_base_bdevs": 2, 00:07:06.827 "num_base_bdevs_discovered": 1, 00:07:06.827 "num_base_bdevs_operational": 2, 00:07:06.827 "base_bdevs_list": [ 00:07:06.827 { 00:07:06.827 "name": "BaseBdev1", 00:07:06.827 "uuid": "0512228a-b57b-4011-b2f3-44a167c8d431", 00:07:06.827 "is_configured": true, 00:07:06.827 "data_offset": 2048, 
00:07:06.827 "data_size": 63488 00:07:06.827 }, 00:07:06.827 { 00:07:06.827 "name": "BaseBdev2", 00:07:06.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.827 "is_configured": false, 00:07:06.827 "data_offset": 0, 00:07:06.827 "data_size": 0 00:07:06.827 } 00:07:06.827 ] 00:07:06.827 }' 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.827 09:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.395 [2024-10-15 09:06:25.173576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:07.395 [2024-10-15 09:06:25.173931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:07.395 [2024-10-15 09:06:25.173956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:07.395 BaseBdev2 00:07:07.395 [2024-10-15 09:06:25.174274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:07.395 [2024-10-15 09:06:25.174459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:07.395 [2024-10-15 09:06:25.174476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:07.395 [2024-10-15 09:06:25.174648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.395 [ 00:07:07.395 { 00:07:07.395 "name": "BaseBdev2", 00:07:07.395 "aliases": [ 00:07:07.395 "f3b6a764-17db-44a5-9c9a-6fe4f7bb89d7" 00:07:07.395 ], 00:07:07.395 "product_name": "Malloc disk", 00:07:07.395 "block_size": 512, 00:07:07.395 "num_blocks": 65536, 00:07:07.395 "uuid": "f3b6a764-17db-44a5-9c9a-6fe4f7bb89d7", 00:07:07.395 "assigned_rate_limits": { 00:07:07.395 "rw_ios_per_sec": 0, 00:07:07.395 "rw_mbytes_per_sec": 0, 00:07:07.395 "r_mbytes_per_sec": 0, 00:07:07.395 "w_mbytes_per_sec": 0 00:07:07.395 }, 00:07:07.395 "claimed": true, 00:07:07.395 "claim_type": 
"exclusive_write", 00:07:07.395 "zoned": false, 00:07:07.395 "supported_io_types": { 00:07:07.395 "read": true, 00:07:07.395 "write": true, 00:07:07.395 "unmap": true, 00:07:07.395 "flush": true, 00:07:07.395 "reset": true, 00:07:07.395 "nvme_admin": false, 00:07:07.395 "nvme_io": false, 00:07:07.395 "nvme_io_md": false, 00:07:07.395 "write_zeroes": true, 00:07:07.395 "zcopy": true, 00:07:07.395 "get_zone_info": false, 00:07:07.395 "zone_management": false, 00:07:07.395 "zone_append": false, 00:07:07.395 "compare": false, 00:07:07.395 "compare_and_write": false, 00:07:07.395 "abort": true, 00:07:07.395 "seek_hole": false, 00:07:07.395 "seek_data": false, 00:07:07.395 "copy": true, 00:07:07.395 "nvme_iov_md": false 00:07:07.395 }, 00:07:07.395 "memory_domains": [ 00:07:07.395 { 00:07:07.395 "dma_device_id": "system", 00:07:07.395 "dma_device_type": 1 00:07:07.395 }, 00:07:07.395 { 00:07:07.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.395 "dma_device_type": 2 00:07:07.395 } 00:07:07.395 ], 00:07:07.395 "driver_specific": {} 00:07:07.395 } 00:07:07.395 ] 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.395 "name": "Existed_Raid", 00:07:07.395 "uuid": "8180cef6-4cd6-45da-ad43-4dd8b800353a", 00:07:07.395 "strip_size_kb": 64, 00:07:07.395 "state": "online", 00:07:07.395 "raid_level": "raid0", 00:07:07.395 "superblock": true, 00:07:07.395 "num_base_bdevs": 2, 00:07:07.395 "num_base_bdevs_discovered": 2, 00:07:07.395 "num_base_bdevs_operational": 2, 00:07:07.395 "base_bdevs_list": [ 00:07:07.395 { 00:07:07.395 "name": "BaseBdev1", 00:07:07.395 "uuid": "0512228a-b57b-4011-b2f3-44a167c8d431", 00:07:07.395 "is_configured": true, 00:07:07.395 "data_offset": 2048, 00:07:07.395 "data_size": 63488 
00:07:07.395 }, 00:07:07.395 { 00:07:07.395 "name": "BaseBdev2", 00:07:07.395 "uuid": "f3b6a764-17db-44a5-9c9a-6fe4f7bb89d7", 00:07:07.395 "is_configured": true, 00:07:07.395 "data_offset": 2048, 00:07:07.395 "data_size": 63488 00:07:07.395 } 00:07:07.395 ] 00:07:07.395 }' 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.395 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.962 [2024-10-15 09:06:25.681970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:07.962 "name": 
"Existed_Raid", 00:07:07.962 "aliases": [ 00:07:07.962 "8180cef6-4cd6-45da-ad43-4dd8b800353a" 00:07:07.962 ], 00:07:07.962 "product_name": "Raid Volume", 00:07:07.962 "block_size": 512, 00:07:07.962 "num_blocks": 126976, 00:07:07.962 "uuid": "8180cef6-4cd6-45da-ad43-4dd8b800353a", 00:07:07.962 "assigned_rate_limits": { 00:07:07.962 "rw_ios_per_sec": 0, 00:07:07.962 "rw_mbytes_per_sec": 0, 00:07:07.962 "r_mbytes_per_sec": 0, 00:07:07.962 "w_mbytes_per_sec": 0 00:07:07.962 }, 00:07:07.962 "claimed": false, 00:07:07.962 "zoned": false, 00:07:07.962 "supported_io_types": { 00:07:07.962 "read": true, 00:07:07.962 "write": true, 00:07:07.962 "unmap": true, 00:07:07.962 "flush": true, 00:07:07.962 "reset": true, 00:07:07.962 "nvme_admin": false, 00:07:07.962 "nvme_io": false, 00:07:07.962 "nvme_io_md": false, 00:07:07.962 "write_zeroes": true, 00:07:07.962 "zcopy": false, 00:07:07.962 "get_zone_info": false, 00:07:07.962 "zone_management": false, 00:07:07.962 "zone_append": false, 00:07:07.962 "compare": false, 00:07:07.962 "compare_and_write": false, 00:07:07.962 "abort": false, 00:07:07.962 "seek_hole": false, 00:07:07.962 "seek_data": false, 00:07:07.962 "copy": false, 00:07:07.962 "nvme_iov_md": false 00:07:07.962 }, 00:07:07.962 "memory_domains": [ 00:07:07.962 { 00:07:07.962 "dma_device_id": "system", 00:07:07.962 "dma_device_type": 1 00:07:07.962 }, 00:07:07.962 { 00:07:07.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.962 "dma_device_type": 2 00:07:07.962 }, 00:07:07.962 { 00:07:07.962 "dma_device_id": "system", 00:07:07.962 "dma_device_type": 1 00:07:07.962 }, 00:07:07.962 { 00:07:07.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.962 "dma_device_type": 2 00:07:07.962 } 00:07:07.962 ], 00:07:07.962 "driver_specific": { 00:07:07.962 "raid": { 00:07:07.962 "uuid": "8180cef6-4cd6-45da-ad43-4dd8b800353a", 00:07:07.962 "strip_size_kb": 64, 00:07:07.962 "state": "online", 00:07:07.962 "raid_level": "raid0", 00:07:07.962 "superblock": true, 00:07:07.962 
"num_base_bdevs": 2, 00:07:07.962 "num_base_bdevs_discovered": 2, 00:07:07.962 "num_base_bdevs_operational": 2, 00:07:07.962 "base_bdevs_list": [ 00:07:07.962 { 00:07:07.962 "name": "BaseBdev1", 00:07:07.962 "uuid": "0512228a-b57b-4011-b2f3-44a167c8d431", 00:07:07.962 "is_configured": true, 00:07:07.962 "data_offset": 2048, 00:07:07.962 "data_size": 63488 00:07:07.962 }, 00:07:07.962 { 00:07:07.962 "name": "BaseBdev2", 00:07:07.962 "uuid": "f3b6a764-17db-44a5-9c9a-6fe4f7bb89d7", 00:07:07.962 "is_configured": true, 00:07:07.962 "data_offset": 2048, 00:07:07.962 "data_size": 63488 00:07:07.962 } 00:07:07.962 ] 00:07:07.962 } 00:07:07.962 } 00:07:07.962 }' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:07.962 BaseBdev2' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
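Aside from the transcript itself: the two jq filters echoed above at bdev_raid.sh@188 and @189 can be replicated outside SPDK. The sketch below is illustrative only, not part of the log; the bdev names and `block_size` are copied from the dump above, and it assumes the three metadata fields (`md_size`, `md_interleave`, `dif_type`) come back null, which matches the `cmp_raid_bdev='512   '` comparison string (512 plus three empty fields) that the log shows.

```python
import json

# Minimal excerpt of the Existed_Raid info dumped above (most fields omitted).
# Assumption: md_size/md_interleave/dif_type are null, matching the log.
raid_info = json.loads("""
{
  "block_size": 512,
  "md_size": null,
  "md_interleave": null,
  "dif_type": null,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of bdev_raid.sh@188:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
base_bdev_names = [b["name"]
                   for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]

# Equivalent of bdev_raid.sh@189:
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq's join() renders null as an empty string, which is why the log's
# comparison value is "512" followed by three spaces.
fields = [raid_info.get(k)
          for k in ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_raid_bdev = " ".join("" if f is None else str(f) for f in fields)
```

The later `[[ 512 == \5\1\2\ \ \ ]]` checks in the transcript are comparing exactly this joined string for the raid bdev against each base bdev, verifying they share block size and metadata layout.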
00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.962 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.220 [2024-10-15 09:06:25.873715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:08.220 [2024-10-15 09:06:25.873763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.220 [2024-10-15 09:06:25.873825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.220 09:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.220 09:06:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.220 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.220 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.220 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.220 "name": "Existed_Raid", 00:07:08.220 "uuid": "8180cef6-4cd6-45da-ad43-4dd8b800353a", 00:07:08.220 "strip_size_kb": 64, 00:07:08.220 "state": "offline", 00:07:08.220 "raid_level": "raid0", 00:07:08.220 "superblock": true, 00:07:08.220 "num_base_bdevs": 2, 00:07:08.220 "num_base_bdevs_discovered": 1, 00:07:08.220 "num_base_bdevs_operational": 1, 00:07:08.220 "base_bdevs_list": [ 00:07:08.220 { 00:07:08.220 "name": null, 00:07:08.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.220 "is_configured": false, 00:07:08.220 "data_offset": 0, 00:07:08.220 "data_size": 63488 00:07:08.220 }, 00:07:08.220 { 00:07:08.220 "name": "BaseBdev2", 00:07:08.220 "uuid": "f3b6a764-17db-44a5-9c9a-6fe4f7bb89d7", 00:07:08.220 "is_configured": true, 00:07:08.220 "data_offset": 2048, 00:07:08.220 "data_size": 63488 00:07:08.220 } 00:07:08.220 ] 00:07:08.220 }' 00:07:08.220 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.220 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.788 09:06:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.788 [2024-10-15 09:06:26.461784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:08.788 [2024-10-15 09:06:26.461856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60958 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60958 ']' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60958 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60958 00:07:08.788 killing process with pid 60958 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60958' 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60958 00:07:08.788 [2024-10-15 09:06:26.662704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.788 09:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60958 00:07:08.788 [2024-10-15 09:06:26.683455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.171 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:10.171 00:07:10.171 real 0m5.282s 00:07:10.171 user 0m7.547s 00:07:10.171 sys 0m0.841s 00:07:10.171 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.171 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.171 ************************************ 00:07:10.171 END TEST raid_state_function_test_sb 00:07:10.171 ************************************ 00:07:10.171 09:06:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:10.171 09:06:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:10.171 09:06:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.171 09:06:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.171 ************************************ 00:07:10.171 START TEST raid_superblock_test 00:07:10.171 ************************************ 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:10.171 09:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61216 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61216 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61216 ']' 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
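Stepping outside the transcript briefly: the `'[' raid0 '!=' raid1 ']'` test at bdev_raid.sh@404-406 above is where raid_superblock_test decides whether to pass a strip size, since raid1 mirrors and has no stripe. A hedged Python sketch of that decision follows; the function name is mine, not SPDK's.

```python
# Illustrative only (not part of the log): mirrors the shell logic
#   '[' raid0 '!=' raid1 ']' && strip_size=64 && strip_size_create_arg='-z 64'
# seen at bdev_raid.sh@404-406. raid1 gets no -z argument.
def strip_size_create_arg(raid_level: str, strip_size: int = 64) -> str:
    return "" if raid_level == "raid1" else f"-z {strip_size}"
```

For the raid0 run logged here this yields `-z 64`, which is exactly the argument later passed to `rpc_cmd bdev_raid_create` at bdev_raid.sh@430.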
00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.171 09:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.436 [2024-10-15 09:06:28.093873] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:07:10.436 [2024-10-15 09:06:28.094011] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61216 ] 00:07:10.436 [2024-10-15 09:06:28.260213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.694 [2024-10-15 09:06:28.398606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.953 [2024-10-15 09:06:28.628156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.953 [2024-10-15 09:06:28.628214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.213 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.213 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:11.213 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:11.213 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:11.213 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:11.213 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:11.213 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:11.213 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:11.213 09:06:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.214 malloc1 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.214 [2024-10-15 09:06:29.093066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:11.214 [2024-10-15 09:06:29.093251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.214 [2024-10-15 09:06:29.093302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:11.214 [2024-10-15 09:06:29.093341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.214 [2024-10-15 09:06:29.095942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.214 [2024-10-15 09:06:29.096064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:11.214 pt1 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:11.214 09:06:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.214 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.472 malloc2 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.472 [2024-10-15 09:06:29.160840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:11.472 [2024-10-15 09:06:29.161001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.472 [2024-10-15 09:06:29.161051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:11.472 
[2024-10-15 09:06:29.161091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.472 [2024-10-15 09:06:29.163724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.472 [2024-10-15 09:06:29.163816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:11.472 pt2 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.472 [2024-10-15 09:06:29.172925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:11.472 [2024-10-15 09:06:29.175105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:11.472 [2024-10-15 09:06:29.175372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:11.472 [2024-10-15 09:06:29.175392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:11.472 [2024-10-15 09:06:29.175749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:11.472 [2024-10-15 09:06:29.175939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:11.472 [2024-10-15 09:06:29.175953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:11.472 [2024-10-15 09:06:29.176158] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.472 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.473 "name": "raid_bdev1", 00:07:11.473 "uuid": 
"325ab88a-c569-49f9-a50d-a6901da3af3f", 00:07:11.473 "strip_size_kb": 64, 00:07:11.473 "state": "online", 00:07:11.473 "raid_level": "raid0", 00:07:11.473 "superblock": true, 00:07:11.473 "num_base_bdevs": 2, 00:07:11.473 "num_base_bdevs_discovered": 2, 00:07:11.473 "num_base_bdevs_operational": 2, 00:07:11.473 "base_bdevs_list": [ 00:07:11.473 { 00:07:11.473 "name": "pt1", 00:07:11.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.473 "is_configured": true, 00:07:11.473 "data_offset": 2048, 00:07:11.473 "data_size": 63488 00:07:11.473 }, 00:07:11.473 { 00:07:11.473 "name": "pt2", 00:07:11.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.473 "is_configured": true, 00:07:11.473 "data_offset": 2048, 00:07:11.473 "data_size": 63488 00:07:11.473 } 00:07:11.473 ] 00:07:11.473 }' 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.473 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.040 
09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:12.040 [2024-10-15 09:06:29.644458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:12.040 "name": "raid_bdev1", 00:07:12.040 "aliases": [ 00:07:12.040 "325ab88a-c569-49f9-a50d-a6901da3af3f" 00:07:12.040 ], 00:07:12.040 "product_name": "Raid Volume", 00:07:12.040 "block_size": 512, 00:07:12.040 "num_blocks": 126976, 00:07:12.040 "uuid": "325ab88a-c569-49f9-a50d-a6901da3af3f", 00:07:12.040 "assigned_rate_limits": { 00:07:12.040 "rw_ios_per_sec": 0, 00:07:12.040 "rw_mbytes_per_sec": 0, 00:07:12.040 "r_mbytes_per_sec": 0, 00:07:12.040 "w_mbytes_per_sec": 0 00:07:12.040 }, 00:07:12.040 "claimed": false, 00:07:12.040 "zoned": false, 00:07:12.040 "supported_io_types": { 00:07:12.040 "read": true, 00:07:12.040 "write": true, 00:07:12.040 "unmap": true, 00:07:12.040 "flush": true, 00:07:12.040 "reset": true, 00:07:12.040 "nvme_admin": false, 00:07:12.040 "nvme_io": false, 00:07:12.040 "nvme_io_md": false, 00:07:12.040 "write_zeroes": true, 00:07:12.040 "zcopy": false, 00:07:12.040 "get_zone_info": false, 00:07:12.040 "zone_management": false, 00:07:12.040 "zone_append": false, 00:07:12.040 "compare": false, 00:07:12.040 "compare_and_write": false, 00:07:12.040 "abort": false, 00:07:12.040 "seek_hole": false, 00:07:12.040 "seek_data": false, 00:07:12.040 "copy": false, 00:07:12.040 "nvme_iov_md": false 00:07:12.040 }, 00:07:12.040 "memory_domains": [ 00:07:12.040 { 00:07:12.040 "dma_device_id": "system", 00:07:12.040 "dma_device_type": 1 00:07:12.040 }, 00:07:12.040 { 00:07:12.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.040 "dma_device_type": 2 00:07:12.040 }, 00:07:12.040 { 00:07:12.040 "dma_device_id": "system", 00:07:12.040 
"dma_device_type": 1 00:07:12.040 }, 00:07:12.040 { 00:07:12.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.040 "dma_device_type": 2 00:07:12.040 } 00:07:12.040 ], 00:07:12.040 "driver_specific": { 00:07:12.040 "raid": { 00:07:12.040 "uuid": "325ab88a-c569-49f9-a50d-a6901da3af3f", 00:07:12.040 "strip_size_kb": 64, 00:07:12.040 "state": "online", 00:07:12.040 "raid_level": "raid0", 00:07:12.040 "superblock": true, 00:07:12.040 "num_base_bdevs": 2, 00:07:12.040 "num_base_bdevs_discovered": 2, 00:07:12.040 "num_base_bdevs_operational": 2, 00:07:12.040 "base_bdevs_list": [ 00:07:12.040 { 00:07:12.040 "name": "pt1", 00:07:12.040 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.040 "is_configured": true, 00:07:12.040 "data_offset": 2048, 00:07:12.040 "data_size": 63488 00:07:12.040 }, 00:07:12.040 { 00:07:12.040 "name": "pt2", 00:07:12.040 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.040 "is_configured": true, 00:07:12.040 "data_offset": 2048, 00:07:12.040 "data_size": 63488 00:07:12.040 } 00:07:12.040 ] 00:07:12.040 } 00:07:12.040 } 00:07:12.040 }' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:12.040 pt2' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.040 09:06:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.040 [2024-10-15 09:06:29.900036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:12.040 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=325ab88a-c569-49f9-a50d-a6901da3af3f 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 325ab88a-c569-49f9-a50d-a6901da3af3f ']' 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.331 [2024-10-15 09:06:29.951641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.331 [2024-10-15 09:06:29.951676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.331 [2024-10-15 09:06:29.951795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.331 [2024-10-15 09:06:29.951851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.331 [2024-10-15 09:06:29.951864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.331 09:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:12.331 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.332 [2024-10-15 09:06:30.095435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:12.332 [2024-10-15 09:06:30.097661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:12.332 [2024-10-15 09:06:30.097761] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:12.332 [2024-10-15 09:06:30.097821] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:12.332 [2024-10-15 09:06:30.097839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.332 [2024-10-15 09:06:30.097851] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:12.332 request: 00:07:12.332 { 00:07:12.332 "name": "raid_bdev1", 00:07:12.332 "raid_level": "raid0", 00:07:12.332 "base_bdevs": [ 00:07:12.332 "malloc1", 00:07:12.332 "malloc2" 00:07:12.332 ], 00:07:12.332 "strip_size_kb": 64, 00:07:12.332 "superblock": false, 00:07:12.332 "method": "bdev_raid_create", 00:07:12.332 "req_id": 1 00:07:12.332 } 00:07:12.332 Got JSON-RPC error response 00:07:12.332 response: 00:07:12.332 { 00:07:12.332 "code": -17, 00:07:12.332 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:12.332 } 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.332 [2024-10-15 09:06:30.159292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:12.332 [2024-10-15 09:06:30.159462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.332 [2024-10-15 09:06:30.159509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:12.332 [2024-10-15 09:06:30.159574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.332 [2024-10-15 09:06:30.162322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.332 [2024-10-15 09:06:30.162452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:12.332 [2024-10-15 09:06:30.162637] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:12.332 [2024-10-15 09:06:30.162839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:12.332 pt1 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.332 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.607 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.607 "name": "raid_bdev1", 00:07:12.607 "uuid": "325ab88a-c569-49f9-a50d-a6901da3af3f", 00:07:12.607 "strip_size_kb": 64, 00:07:12.607 "state": "configuring", 00:07:12.607 "raid_level": "raid0", 00:07:12.607 "superblock": true, 00:07:12.607 "num_base_bdevs": 2, 00:07:12.607 "num_base_bdevs_discovered": 1, 00:07:12.607 "num_base_bdevs_operational": 2, 00:07:12.607 "base_bdevs_list": [ 00:07:12.607 { 00:07:12.607 "name": "pt1", 00:07:12.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.607 "is_configured": true, 00:07:12.607 "data_offset": 2048, 00:07:12.607 "data_size": 63488 00:07:12.607 }, 00:07:12.607 { 00:07:12.607 "name": null, 00:07:12.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.607 "is_configured": false, 00:07:12.607 "data_offset": 2048, 00:07:12.607 "data_size": 63488 00:07:12.607 } 00:07:12.607 ] 00:07:12.607 }' 00:07:12.607 09:06:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.607 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.866 [2024-10-15 09:06:30.626484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:12.866 [2024-10-15 09:06:30.626656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.866 [2024-10-15 09:06:30.626744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:12.866 [2024-10-15 09:06:30.626786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.866 [2024-10-15 09:06:30.627398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.866 [2024-10-15 09:06:30.627482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:12.866 [2024-10-15 09:06:30.627611] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:12.866 [2024-10-15 09:06:30.627672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:12.866 [2024-10-15 09:06:30.627867] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.866 [2024-10-15 09:06:30.627914] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.866 [2024-10-15 09:06:30.628223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:12.866 [2024-10-15 09:06:30.628443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.866 [2024-10-15 09:06:30.628493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:12.866 [2024-10-15 09:06:30.628717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.866 pt2 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.866 "name": "raid_bdev1", 00:07:12.866 "uuid": "325ab88a-c569-49f9-a50d-a6901da3af3f", 00:07:12.866 "strip_size_kb": 64, 00:07:12.866 "state": "online", 00:07:12.866 "raid_level": "raid0", 00:07:12.866 "superblock": true, 00:07:12.866 "num_base_bdevs": 2, 00:07:12.866 "num_base_bdevs_discovered": 2, 00:07:12.866 "num_base_bdevs_operational": 2, 00:07:12.866 "base_bdevs_list": [ 00:07:12.866 { 00:07:12.866 "name": "pt1", 00:07:12.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.866 "is_configured": true, 00:07:12.866 "data_offset": 2048, 00:07:12.866 "data_size": 63488 00:07:12.866 }, 00:07:12.866 { 00:07:12.866 "name": "pt2", 00:07:12.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.866 "is_configured": true, 00:07:12.866 "data_offset": 2048, 00:07:12.866 "data_size": 63488 00:07:12.866 } 00:07:12.866 ] 00:07:12.866 }' 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.866 09:06:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:13.433 
09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.433 [2024-10-15 09:06:31.102020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:13.433 "name": "raid_bdev1", 00:07:13.433 "aliases": [ 00:07:13.433 "325ab88a-c569-49f9-a50d-a6901da3af3f" 00:07:13.433 ], 00:07:13.433 "product_name": "Raid Volume", 00:07:13.433 "block_size": 512, 00:07:13.433 "num_blocks": 126976, 00:07:13.433 "uuid": "325ab88a-c569-49f9-a50d-a6901da3af3f", 00:07:13.433 "assigned_rate_limits": { 00:07:13.433 "rw_ios_per_sec": 0, 00:07:13.433 "rw_mbytes_per_sec": 0, 00:07:13.433 "r_mbytes_per_sec": 0, 00:07:13.433 "w_mbytes_per_sec": 0 00:07:13.433 }, 00:07:13.433 "claimed": false, 00:07:13.433 "zoned": false, 00:07:13.433 "supported_io_types": { 00:07:13.433 "read": true, 00:07:13.433 "write": true, 00:07:13.433 "unmap": true, 00:07:13.433 "flush": true, 00:07:13.433 "reset": true, 00:07:13.433 "nvme_admin": false, 00:07:13.433 "nvme_io": false, 00:07:13.433 "nvme_io_md": false, 00:07:13.433 
"write_zeroes": true, 00:07:13.433 "zcopy": false, 00:07:13.433 "get_zone_info": false, 00:07:13.433 "zone_management": false, 00:07:13.433 "zone_append": false, 00:07:13.433 "compare": false, 00:07:13.433 "compare_and_write": false, 00:07:13.433 "abort": false, 00:07:13.433 "seek_hole": false, 00:07:13.433 "seek_data": false, 00:07:13.433 "copy": false, 00:07:13.433 "nvme_iov_md": false 00:07:13.433 }, 00:07:13.433 "memory_domains": [ 00:07:13.433 { 00:07:13.433 "dma_device_id": "system", 00:07:13.433 "dma_device_type": 1 00:07:13.433 }, 00:07:13.433 { 00:07:13.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.433 "dma_device_type": 2 00:07:13.433 }, 00:07:13.433 { 00:07:13.433 "dma_device_id": "system", 00:07:13.433 "dma_device_type": 1 00:07:13.433 }, 00:07:13.433 { 00:07:13.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.433 "dma_device_type": 2 00:07:13.433 } 00:07:13.433 ], 00:07:13.433 "driver_specific": { 00:07:13.433 "raid": { 00:07:13.433 "uuid": "325ab88a-c569-49f9-a50d-a6901da3af3f", 00:07:13.433 "strip_size_kb": 64, 00:07:13.433 "state": "online", 00:07:13.433 "raid_level": "raid0", 00:07:13.433 "superblock": true, 00:07:13.433 "num_base_bdevs": 2, 00:07:13.433 "num_base_bdevs_discovered": 2, 00:07:13.433 "num_base_bdevs_operational": 2, 00:07:13.433 "base_bdevs_list": [ 00:07:13.433 { 00:07:13.433 "name": "pt1", 00:07:13.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:13.433 "is_configured": true, 00:07:13.433 "data_offset": 2048, 00:07:13.433 "data_size": 63488 00:07:13.433 }, 00:07:13.433 { 00:07:13.433 "name": "pt2", 00:07:13.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:13.433 "is_configured": true, 00:07:13.433 "data_offset": 2048, 00:07:13.433 "data_size": 63488 00:07:13.433 } 00:07:13.433 ] 00:07:13.433 } 00:07:13.433 } 00:07:13.433 }' 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:13.433 pt2' 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.433 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.433 09:06:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.434 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.434 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:13.434 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:13.434 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.434 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.692 [2024-10-15 09:06:31.329989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 325ab88a-c569-49f9-a50d-a6901da3af3f '!=' 325ab88a-c569-49f9-a50d-a6901da3af3f ']' 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61216 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61216 ']' 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61216 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61216 00:07:13.692 killing process with pid 61216 
00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61216' 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61216 00:07:13.692 [2024-10-15 09:06:31.402264] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.692 [2024-10-15 09:06:31.402397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.692 09:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61216 00:07:13.692 [2024-10-15 09:06:31.402462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.692 [2024-10-15 09:06:31.402484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:13.949 [2024-10-15 09:06:31.651089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.417 09:06:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:15.417 00:07:15.417 real 0m4.874s 00:07:15.417 user 0m6.871s 00:07:15.417 sys 0m0.784s 00:07:15.417 09:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.417 ************************************ 00:07:15.417 END TEST raid_superblock_test 00:07:15.417 ************************************ 00:07:15.417 09:06:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.417 09:06:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:15.417 09:06:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:15.417 09:06:32 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.417 09:06:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.417 ************************************ 00:07:15.417 START TEST raid_read_error_test 00:07:15.417 ************************************ 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:15.417 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:15.417 09:06:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6anK3VZY1Q 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61427 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61427 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61427 ']' 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.418 09:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.418 [2024-10-15 09:06:33.049225] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:07:15.418 [2024-10-15 09:06:33.049460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61427 ] 00:07:15.418 [2024-10-15 09:06:33.215863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.677 [2024-10-15 09:06:33.341724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.677 [2024-10-15 09:06:33.563569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.677 [2024-10-15 09:06:33.563623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 BaseBdev1_malloc 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 true 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 [2024-10-15 09:06:33.993866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:16.245 [2024-10-15 09:06:33.993938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.245 [2024-10-15 09:06:33.993965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:16.245 [2024-10-15 09:06:33.993979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.245 [2024-10-15 09:06:33.996618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.245 [2024-10-15 09:06:33.996681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:16.245 BaseBdev1 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.245 09:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:16.245 BaseBdev2_malloc 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 true 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 [2024-10-15 09:06:34.066812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:16.245 [2024-10-15 09:06:34.066877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.245 [2024-10-15 09:06:34.066898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:16.245 [2024-10-15 09:06:34.066910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.245 [2024-10-15 09:06:34.069463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.245 [2024-10-15 09:06:34.069566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:16.245 BaseBdev2 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:16.245 09:06:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.245 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 [2024-10-15 09:06:34.078910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:16.245 [2024-10-15 09:06:34.081174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.245 [2024-10-15 09:06:34.081494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:16.245 [2024-10-15 09:06:34.081563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.245 [2024-10-15 09:06:34.081940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:16.245 [2024-10-15 09:06:34.082199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:16.246 [2024-10-15 09:06:34.082255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:16.246 [2024-10-15 09:06:34.082547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.246 "name": "raid_bdev1", 00:07:16.246 "uuid": "33392c3d-6d07-4365-8f25-72f2e9b2afd5", 00:07:16.246 "strip_size_kb": 64, 00:07:16.246 "state": "online", 00:07:16.246 "raid_level": "raid0", 00:07:16.246 "superblock": true, 00:07:16.246 "num_base_bdevs": 2, 00:07:16.246 "num_base_bdevs_discovered": 2, 00:07:16.246 "num_base_bdevs_operational": 2, 00:07:16.246 "base_bdevs_list": [ 00:07:16.246 { 00:07:16.246 "name": "BaseBdev1", 00:07:16.246 "uuid": "676a8b81-684b-51fc-b737-0387501fe9e2", 00:07:16.246 "is_configured": true, 00:07:16.246 "data_offset": 2048, 00:07:16.246 "data_size": 63488 00:07:16.246 }, 00:07:16.246 { 00:07:16.246 "name": "BaseBdev2", 00:07:16.246 "uuid": "2e29dafc-8398-5c86-90e0-ef15d460755c", 00:07:16.246 "is_configured": true, 00:07:16.246 "data_offset": 2048, 00:07:16.246 "data_size": 63488 00:07:16.246 } 00:07:16.246 ] 00:07:16.246 }' 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.246 09:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.812 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:16.812 09:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:16.812 [2024-10-15 09:06:34.667493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.749 "name": "raid_bdev1", 00:07:17.749 "uuid": "33392c3d-6d07-4365-8f25-72f2e9b2afd5", 00:07:17.749 "strip_size_kb": 64, 00:07:17.749 "state": "online", 00:07:17.749 "raid_level": "raid0", 00:07:17.749 "superblock": true, 00:07:17.749 "num_base_bdevs": 2, 00:07:17.749 "num_base_bdevs_discovered": 2, 00:07:17.749 "num_base_bdevs_operational": 2, 00:07:17.749 "base_bdevs_list": [ 00:07:17.749 { 00:07:17.749 "name": "BaseBdev1", 00:07:17.749 "uuid": "676a8b81-684b-51fc-b737-0387501fe9e2", 00:07:17.749 "is_configured": true, 00:07:17.749 "data_offset": 2048, 00:07:17.749 "data_size": 63488 00:07:17.749 }, 00:07:17.749 { 00:07:17.749 "name": "BaseBdev2", 00:07:17.749 "uuid": "2e29dafc-8398-5c86-90e0-ef15d460755c", 00:07:17.749 "is_configured": true, 00:07:17.749 "data_offset": 2048, 00:07:17.749 "data_size": 63488 00:07:17.749 } 00:07:17.749 ] 00:07:17.749 }' 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.749 09:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.316 [2024-10-15 09:06:36.097284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.316 [2024-10-15 09:06:36.097328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.316 [2024-10-15 09:06:36.100619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.316 [2024-10-15 09:06:36.100671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.316 [2024-10-15 09:06:36.100716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.316 [2024-10-15 09:06:36.100732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:18.316 { 00:07:18.316 "results": [ 00:07:18.316 { 00:07:18.316 "job": "raid_bdev1", 00:07:18.316 "core_mask": "0x1", 00:07:18.316 "workload": "randrw", 00:07:18.316 "percentage": 50, 00:07:18.316 "status": "finished", 00:07:18.316 "queue_depth": 1, 00:07:18.316 "io_size": 131072, 00:07:18.316 "runtime": 1.430379, 00:07:18.316 "iops": 13931.272760576043, 00:07:18.316 "mibps": 1741.4090950720054, 00:07:18.316 "io_failed": 1, 00:07:18.316 "io_timeout": 0, 00:07:18.316 "avg_latency_us": 99.6451392699307, 00:07:18.316 "min_latency_us": 26.494323144104804, 00:07:18.316 "max_latency_us": 1845.8829694323144 00:07:18.316 } 00:07:18.316 ], 00:07:18.316 "core_count": 1 00:07:18.316 } 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61427 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61427 ']' 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61427 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61427 00:07:18.316 killing process with pid 61427 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61427' 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61427 00:07:18.316 09:06:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61427 00:07:18.316 [2024-10-15 09:06:36.147174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.576 [2024-10-15 09:06:36.306477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6anK3VZY1Q 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:19.957 ************************************ 00:07:19.957 END TEST raid_read_error_test 00:07:19.957 ************************************ 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:19.957 00:07:19.957 real 0m4.714s 00:07:19.957 user 0m5.699s 00:07:19.957 sys 0m0.586s 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.957 09:06:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.957 09:06:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:19.957 09:06:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:19.957 09:06:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.957 09:06:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.957 ************************************ 00:07:19.957 START TEST raid_write_error_test 00:07:19.957 ************************************ 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.957 09:06:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VmTSuqFjDe 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61573 00:07:19.957 09:06:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61573 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61573 ']' 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:19.957 09:06:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.957 [2024-10-15 09:06:37.824070] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:07:19.957 [2024-10-15 09:06:37.824201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61573 ] 00:07:20.222 [2024-10-15 09:06:37.992220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.487 [2024-10-15 09:06:38.120899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.487 [2024-10-15 09:06:38.342212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.487 [2024-10-15 09:06:38.342292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 BaseBdev1_malloc 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 true 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 [2024-10-15 09:06:38.799512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:21.057 [2024-10-15 09:06:38.799604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.057 [2024-10-15 09:06:38.799632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:21.057 [2024-10-15 09:06:38.799646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.057 [2024-10-15 09:06:38.802431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.057 [2024-10-15 09:06:38.802484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:21.057 BaseBdev1 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 BaseBdev2_malloc 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:21.057 09:06:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 true 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 [2024-10-15 09:06:38.869573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:21.057 [2024-10-15 09:06:38.869720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.057 [2024-10-15 09:06:38.869749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:21.057 [2024-10-15 09:06:38.869762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.057 [2024-10-15 09:06:38.872181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.057 [2024-10-15 09:06:38.872228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:21.057 BaseBdev2 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 [2024-10-15 09:06:38.881648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:21.057 [2024-10-15 09:06:38.884042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.057 [2024-10-15 09:06:38.884282] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:21.057 [2024-10-15 09:06:38.884304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.057 [2024-10-15 09:06:38.884653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:21.057 [2024-10-15 09:06:38.884910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:21.057 [2024-10-15 09:06:38.884971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:21.057 [2024-10-15 09:06:38.885205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.057 "name": "raid_bdev1", 00:07:21.057 "uuid": "5c80f2ed-2bb2-4cba-857f-bf0b9e9eaf1c", 00:07:21.057 "strip_size_kb": 64, 00:07:21.057 "state": "online", 00:07:21.057 "raid_level": "raid0", 00:07:21.057 "superblock": true, 00:07:21.057 "num_base_bdevs": 2, 00:07:21.057 "num_base_bdevs_discovered": 2, 00:07:21.057 "num_base_bdevs_operational": 2, 00:07:21.057 "base_bdevs_list": [ 00:07:21.057 { 00:07:21.057 "name": "BaseBdev1", 00:07:21.057 "uuid": "e98a3c25-ffb9-5fb5-b861-8f33258bcb99", 00:07:21.057 "is_configured": true, 00:07:21.057 "data_offset": 2048, 00:07:21.057 "data_size": 63488 00:07:21.057 }, 00:07:21.057 { 00:07:21.057 "name": "BaseBdev2", 00:07:21.057 "uuid": "15e13c4b-ccfc-5e8f-9fc6-ed77f00d835f", 00:07:21.057 "is_configured": true, 00:07:21.057 "data_offset": 2048, 00:07:21.057 "data_size": 63488 00:07:21.057 } 00:07:21.057 ] 00:07:21.057 }' 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.057 09:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.626 09:06:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:21.626 09:06:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:21.626 [2024-10-15 09:06:39.490336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.562 09:06:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.562 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.563 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.563 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.563 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.563 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.563 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.821 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.821 "name": "raid_bdev1", 00:07:22.821 "uuid": "5c80f2ed-2bb2-4cba-857f-bf0b9e9eaf1c", 00:07:22.821 "strip_size_kb": 64, 00:07:22.821 "state": "online", 00:07:22.821 "raid_level": "raid0", 00:07:22.821 "superblock": true, 00:07:22.821 "num_base_bdevs": 2, 00:07:22.821 "num_base_bdevs_discovered": 2, 00:07:22.821 "num_base_bdevs_operational": 2, 00:07:22.821 "base_bdevs_list": [ 00:07:22.821 { 00:07:22.821 "name": "BaseBdev1", 00:07:22.821 "uuid": "e98a3c25-ffb9-5fb5-b861-8f33258bcb99", 00:07:22.821 "is_configured": true, 00:07:22.821 "data_offset": 2048, 00:07:22.821 "data_size": 63488 00:07:22.821 }, 00:07:22.821 { 00:07:22.821 "name": "BaseBdev2", 00:07:22.821 "uuid": "15e13c4b-ccfc-5e8f-9fc6-ed77f00d835f", 00:07:22.821 "is_configured": true, 00:07:22.821 "data_offset": 2048, 00:07:22.821 "data_size": 63488 00:07:22.821 } 00:07:22.821 ] 00:07:22.821 }' 00:07:22.821 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.821 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.080 [2024-10-15 09:06:40.899129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.080 [2024-10-15 09:06:40.899256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.080 [2024-10-15 09:06:40.902133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.080 [2024-10-15 09:06:40.902183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.080 [2024-10-15 09:06:40.902219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.080 [2024-10-15 09:06:40.902232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:23.080 { 00:07:23.080 "results": [ 00:07:23.080 { 00:07:23.080 "job": "raid_bdev1", 00:07:23.080 "core_mask": "0x1", 00:07:23.080 "workload": "randrw", 00:07:23.080 "percentage": 50, 00:07:23.080 "status": "finished", 00:07:23.080 "queue_depth": 1, 00:07:23.080 "io_size": 131072, 00:07:23.080 "runtime": 1.4096, 00:07:23.080 "iops": 13799.659477866062, 00:07:23.080 "mibps": 1724.9574347332577, 00:07:23.080 "io_failed": 1, 00:07:23.080 "io_timeout": 0, 00:07:23.080 "avg_latency_us": 100.56812135037377, 00:07:23.080 "min_latency_us": 28.17117903930131, 00:07:23.080 "max_latency_us": 1645.5545851528384 00:07:23.080 } 00:07:23.080 ], 00:07:23.080 "core_count": 1 00:07:23.080 } 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61573 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61573 ']' 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61573 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61573 00:07:23.080 killing process with pid 61573 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61573' 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61573 00:07:23.080 [2024-10-15 09:06:40.950491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.080 09:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61573 00:07:23.339 [2024-10-15 09:06:41.105054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VmTSuqFjDe 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:24.717 00:07:24.717 real 0m4.711s 00:07:24.717 user 0m5.745s 00:07:24.717 sys 0m0.576s 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.717 ************************************ 00:07:24.717 END TEST raid_write_error_test 00:07:24.717 ************************************ 00:07:24.717 09:06:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.717 09:06:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:24.717 09:06:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:24.717 09:06:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:24.717 09:06:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.717 09:06:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.717 ************************************ 00:07:24.717 START TEST raid_state_function_test 00:07:24.717 ************************************ 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61722 00:07:24.717 09:06:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61722' 00:07:24.717 Process raid pid: 61722 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61722 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61722 ']' 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.717 09:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.717 [2024-10-15 09:06:42.602196] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:07:24.717 [2024-10-15 09:06:42.602329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.977 [2024-10-15 09:06:42.768178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.236 [2024-10-15 09:06:42.933024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.495 [2024-10-15 09:06:43.198409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.495 [2024-10-15 09:06:43.198568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.753 [2024-10-15 09:06:43.556831] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.753 [2024-10-15 09:06:43.556902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.753 [2024-10-15 09:06:43.556913] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.753 [2024-10-15 09:06:43.556925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.753 09:06:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.753 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.754 "name": "Existed_Raid", 00:07:25.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.754 "strip_size_kb": 64, 00:07:25.754 "state": "configuring", 00:07:25.754 
"raid_level": "concat", 00:07:25.754 "superblock": false, 00:07:25.754 "num_base_bdevs": 2, 00:07:25.754 "num_base_bdevs_discovered": 0, 00:07:25.754 "num_base_bdevs_operational": 2, 00:07:25.754 "base_bdevs_list": [ 00:07:25.754 { 00:07:25.754 "name": "BaseBdev1", 00:07:25.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.754 "is_configured": false, 00:07:25.754 "data_offset": 0, 00:07:25.754 "data_size": 0 00:07:25.754 }, 00:07:25.754 { 00:07:25.754 "name": "BaseBdev2", 00:07:25.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.754 "is_configured": false, 00:07:25.754 "data_offset": 0, 00:07:25.754 "data_size": 0 00:07:25.754 } 00:07:25.754 ] 00:07:25.754 }' 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.754 09:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.323 [2024-10-15 09:06:44.027944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.323 [2024-10-15 09:06:44.028059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:26.323 [2024-10-15 09:06:44.035974] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.323 [2024-10-15 09:06:44.036030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.323 [2024-10-15 09:06:44.036041] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.323 [2024-10-15 09:06:44.036056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.323 [2024-10-15 09:06:44.087009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.323 BaseBdev1 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.323 [ 00:07:26.323 { 00:07:26.323 "name": "BaseBdev1", 00:07:26.323 "aliases": [ 00:07:26.323 "11066fec-78f1-49bb-9f63-f77271f6c16f" 00:07:26.323 ], 00:07:26.323 "product_name": "Malloc disk", 00:07:26.323 "block_size": 512, 00:07:26.323 "num_blocks": 65536, 00:07:26.323 "uuid": "11066fec-78f1-49bb-9f63-f77271f6c16f", 00:07:26.323 "assigned_rate_limits": { 00:07:26.323 "rw_ios_per_sec": 0, 00:07:26.323 "rw_mbytes_per_sec": 0, 00:07:26.323 "r_mbytes_per_sec": 0, 00:07:26.323 "w_mbytes_per_sec": 0 00:07:26.323 }, 00:07:26.323 "claimed": true, 00:07:26.323 "claim_type": "exclusive_write", 00:07:26.323 "zoned": false, 00:07:26.323 "supported_io_types": { 00:07:26.323 "read": true, 00:07:26.323 "write": true, 00:07:26.323 "unmap": true, 00:07:26.323 "flush": true, 00:07:26.323 "reset": true, 00:07:26.323 "nvme_admin": false, 00:07:26.323 "nvme_io": false, 00:07:26.323 "nvme_io_md": false, 00:07:26.323 "write_zeroes": true, 00:07:26.323 "zcopy": true, 00:07:26.323 "get_zone_info": false, 00:07:26.323 "zone_management": false, 00:07:26.323 "zone_append": false, 00:07:26.323 "compare": false, 00:07:26.323 "compare_and_write": false, 00:07:26.323 "abort": true, 00:07:26.323 "seek_hole": false, 00:07:26.323 "seek_data": false, 00:07:26.323 "copy": true, 00:07:26.323 "nvme_iov_md": 
false 00:07:26.323 }, 00:07:26.323 "memory_domains": [ 00:07:26.323 { 00:07:26.323 "dma_device_id": "system", 00:07:26.323 "dma_device_type": 1 00:07:26.323 }, 00:07:26.323 { 00:07:26.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.323 "dma_device_type": 2 00:07:26.323 } 00:07:26.323 ], 00:07:26.323 "driver_specific": {} 00:07:26.323 } 00:07:26.323 ] 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.323 
09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.323 "name": "Existed_Raid", 00:07:26.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.323 "strip_size_kb": 64, 00:07:26.323 "state": "configuring", 00:07:26.323 "raid_level": "concat", 00:07:26.323 "superblock": false, 00:07:26.323 "num_base_bdevs": 2, 00:07:26.323 "num_base_bdevs_discovered": 1, 00:07:26.323 "num_base_bdevs_operational": 2, 00:07:26.323 "base_bdevs_list": [ 00:07:26.323 { 00:07:26.323 "name": "BaseBdev1", 00:07:26.323 "uuid": "11066fec-78f1-49bb-9f63-f77271f6c16f", 00:07:26.323 "is_configured": true, 00:07:26.323 "data_offset": 0, 00:07:26.323 "data_size": 65536 00:07:26.323 }, 00:07:26.323 { 00:07:26.323 "name": "BaseBdev2", 00:07:26.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.323 "is_configured": false, 00:07:26.323 "data_offset": 0, 00:07:26.323 "data_size": 0 00:07:26.323 } 00:07:26.323 ] 00:07:26.323 }' 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.323 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.891 [2024-10-15 09:06:44.602278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.891 [2024-10-15 09:06:44.602440] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.891 [2024-10-15 09:06:44.614363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.891 [2024-10-15 09:06:44.616711] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.891 [2024-10-15 09:06:44.616832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.891 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.891 "name": "Existed_Raid", 00:07:26.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.891 "strip_size_kb": 64, 00:07:26.892 "state": "configuring", 00:07:26.892 "raid_level": "concat", 00:07:26.892 "superblock": false, 00:07:26.892 "num_base_bdevs": 2, 00:07:26.892 "num_base_bdevs_discovered": 1, 00:07:26.892 "num_base_bdevs_operational": 2, 00:07:26.892 "base_bdevs_list": [ 00:07:26.892 { 00:07:26.892 "name": "BaseBdev1", 00:07:26.892 "uuid": "11066fec-78f1-49bb-9f63-f77271f6c16f", 00:07:26.892 "is_configured": true, 00:07:26.892 "data_offset": 0, 00:07:26.892 "data_size": 65536 00:07:26.892 }, 00:07:26.892 { 00:07:26.892 "name": "BaseBdev2", 00:07:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.892 "is_configured": false, 00:07:26.892 "data_offset": 0, 00:07:26.892 "data_size": 0 00:07:26.892 } 
00:07:26.892 ] 00:07:26.892 }' 00:07:26.892 09:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.892 09:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.459 [2024-10-15 09:06:45.147426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:27.459 [2024-10-15 09:06:45.147631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:27.459 [2024-10-15 09:06:45.147674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:27.459 [2024-10-15 09:06:45.148082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:27.459 [2024-10-15 09:06:45.148325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:27.459 [2024-10-15 09:06:45.148384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:27.459 [2024-10-15 09:06:45.148775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.459 BaseBdev2 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:27.459 09:06:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.459 [ 00:07:27.459 { 00:07:27.459 "name": "BaseBdev2", 00:07:27.459 "aliases": [ 00:07:27.459 "8f946ff4-e1ab-4829-92fb-e695075b4e85" 00:07:27.459 ], 00:07:27.459 "product_name": "Malloc disk", 00:07:27.459 "block_size": 512, 00:07:27.459 "num_blocks": 65536, 00:07:27.459 "uuid": "8f946ff4-e1ab-4829-92fb-e695075b4e85", 00:07:27.459 "assigned_rate_limits": { 00:07:27.459 "rw_ios_per_sec": 0, 00:07:27.459 "rw_mbytes_per_sec": 0, 00:07:27.459 "r_mbytes_per_sec": 0, 00:07:27.459 "w_mbytes_per_sec": 0 00:07:27.459 }, 00:07:27.459 "claimed": true, 00:07:27.459 "claim_type": "exclusive_write", 00:07:27.459 "zoned": false, 00:07:27.459 "supported_io_types": { 00:07:27.459 "read": true, 00:07:27.459 "write": true, 00:07:27.459 "unmap": true, 00:07:27.459 "flush": true, 00:07:27.459 "reset": true, 00:07:27.459 "nvme_admin": false, 00:07:27.459 "nvme_io": false, 00:07:27.459 "nvme_io_md": 
false, 00:07:27.459 "write_zeroes": true, 00:07:27.459 "zcopy": true, 00:07:27.459 "get_zone_info": false, 00:07:27.459 "zone_management": false, 00:07:27.459 "zone_append": false, 00:07:27.459 "compare": false, 00:07:27.459 "compare_and_write": false, 00:07:27.459 "abort": true, 00:07:27.459 "seek_hole": false, 00:07:27.459 "seek_data": false, 00:07:27.459 "copy": true, 00:07:27.459 "nvme_iov_md": false 00:07:27.459 }, 00:07:27.459 "memory_domains": [ 00:07:27.459 { 00:07:27.459 "dma_device_id": "system", 00:07:27.459 "dma_device_type": 1 00:07:27.459 }, 00:07:27.459 { 00:07:27.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.459 "dma_device_type": 2 00:07:27.459 } 00:07:27.459 ], 00:07:27.459 "driver_specific": {} 00:07:27.459 } 00:07:27.459 ] 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.459 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.459 "name": "Existed_Raid", 00:07:27.459 "uuid": "4c18a1a4-6181-4aa2-8ad0-86a16984b771", 00:07:27.459 "strip_size_kb": 64, 00:07:27.459 "state": "online", 00:07:27.459 "raid_level": "concat", 00:07:27.459 "superblock": false, 00:07:27.459 "num_base_bdevs": 2, 00:07:27.459 "num_base_bdevs_discovered": 2, 00:07:27.459 "num_base_bdevs_operational": 2, 00:07:27.459 "base_bdevs_list": [ 00:07:27.459 { 00:07:27.459 "name": "BaseBdev1", 00:07:27.459 "uuid": "11066fec-78f1-49bb-9f63-f77271f6c16f", 00:07:27.459 "is_configured": true, 00:07:27.459 "data_offset": 0, 00:07:27.459 "data_size": 65536 00:07:27.459 }, 00:07:27.459 { 00:07:27.459 "name": "BaseBdev2", 00:07:27.459 "uuid": "8f946ff4-e1ab-4829-92fb-e695075b4e85", 00:07:27.460 "is_configured": true, 00:07:27.460 "data_offset": 0, 00:07:27.460 "data_size": 65536 00:07:27.460 } 00:07:27.460 ] 00:07:27.460 }' 00:07:27.460 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
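The `verify_raid_bdev_state` helper traced above repeatedly isolates one raid bdev from the `bdev_raid_get_bdevs all` array with a jq `select` filter before asserting on its fields. A minimal sketch of that filter, run against hand-written sample JSON rather than a live SPDK target:

```shell
# Reproduce the jq filter used by verify_raid_bdev_state to pick one raid
# bdev by name out of the array returned by `bdev_raid_get_bdevs all`.
# The JSON below is an illustrative sample, not captured from a target.
sample='[{"name":"Existed_Raid","state":"online"},{"name":"Other_Raid","state":"configuring"}]'
raid_bdev_info=$(printf '%s' "$sample" | jq -r '.[] | select(.name == "Existed_Raid")')
# Individual fields of the selected bdev can then be asserted on:
printf '%s' "$raid_bdev_info" | jq -r '.state'
```

The same pattern generalizes to any field the test checks (`raid_level`, `num_base_bdevs_discovered`, and so on) by swapping the trailing projection.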
00:07:27.460 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.026 [2024-10-15 09:06:45.667023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.026 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.026 "name": "Existed_Raid", 00:07:28.026 "aliases": [ 00:07:28.026 "4c18a1a4-6181-4aa2-8ad0-86a16984b771" 00:07:28.026 ], 00:07:28.026 "product_name": "Raid Volume", 00:07:28.026 "block_size": 512, 00:07:28.026 "num_blocks": 131072, 00:07:28.026 "uuid": "4c18a1a4-6181-4aa2-8ad0-86a16984b771", 00:07:28.026 "assigned_rate_limits": { 00:07:28.026 "rw_ios_per_sec": 0, 00:07:28.027 "rw_mbytes_per_sec": 0, 00:07:28.027 "r_mbytes_per_sec": 
0, 00:07:28.027 "w_mbytes_per_sec": 0 00:07:28.027 }, 00:07:28.027 "claimed": false, 00:07:28.027 "zoned": false, 00:07:28.027 "supported_io_types": { 00:07:28.027 "read": true, 00:07:28.027 "write": true, 00:07:28.027 "unmap": true, 00:07:28.027 "flush": true, 00:07:28.027 "reset": true, 00:07:28.027 "nvme_admin": false, 00:07:28.027 "nvme_io": false, 00:07:28.027 "nvme_io_md": false, 00:07:28.027 "write_zeroes": true, 00:07:28.027 "zcopy": false, 00:07:28.027 "get_zone_info": false, 00:07:28.027 "zone_management": false, 00:07:28.027 "zone_append": false, 00:07:28.027 "compare": false, 00:07:28.027 "compare_and_write": false, 00:07:28.027 "abort": false, 00:07:28.027 "seek_hole": false, 00:07:28.027 "seek_data": false, 00:07:28.027 "copy": false, 00:07:28.027 "nvme_iov_md": false 00:07:28.027 }, 00:07:28.027 "memory_domains": [ 00:07:28.027 { 00:07:28.027 "dma_device_id": "system", 00:07:28.027 "dma_device_type": 1 00:07:28.027 }, 00:07:28.027 { 00:07:28.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.027 "dma_device_type": 2 00:07:28.027 }, 00:07:28.027 { 00:07:28.027 "dma_device_id": "system", 00:07:28.027 "dma_device_type": 1 00:07:28.027 }, 00:07:28.027 { 00:07:28.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.027 "dma_device_type": 2 00:07:28.027 } 00:07:28.027 ], 00:07:28.027 "driver_specific": { 00:07:28.027 "raid": { 00:07:28.027 "uuid": "4c18a1a4-6181-4aa2-8ad0-86a16984b771", 00:07:28.027 "strip_size_kb": 64, 00:07:28.027 "state": "online", 00:07:28.027 "raid_level": "concat", 00:07:28.027 "superblock": false, 00:07:28.027 "num_base_bdevs": 2, 00:07:28.027 "num_base_bdevs_discovered": 2, 00:07:28.027 "num_base_bdevs_operational": 2, 00:07:28.027 "base_bdevs_list": [ 00:07:28.027 { 00:07:28.027 "name": "BaseBdev1", 00:07:28.027 "uuid": "11066fec-78f1-49bb-9f63-f77271f6c16f", 00:07:28.027 "is_configured": true, 00:07:28.027 "data_offset": 0, 00:07:28.027 "data_size": 65536 00:07:28.027 }, 00:07:28.027 { 00:07:28.027 "name": "BaseBdev2", 
00:07:28.027 "uuid": "8f946ff4-e1ab-4829-92fb-e695075b4e85", 00:07:28.027 "is_configured": true, 00:07:28.027 "data_offset": 0, 00:07:28.027 "data_size": 65536 00:07:28.027 } 00:07:28.027 ] 00:07:28.027 } 00:07:28.027 } 00:07:28.027 }' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:28.027 BaseBdev2' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
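`verify_raid_bdev_properties` above compares the geometry of the raid bdev against each base bdev by projecting the same four fields through jq and comparing the joined strings. A sketch of that projection, assuming jq >= 1.6 (where `join` renders null elements as empty strings) and using a hand-written sample JSON object:

```shell
# Project [block_size, md_size, md_interleave, dif_type] into one string,
# as the test does for both the raid bdev and each base bdev; equal strings
# mean equal geometry. Null fields join as empty strings, which is why the
# log shows comparison values like '512   ' with trailing spaces.
bdev='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}'
cmp_base_bdev=$(printf '%s' "$bdev" | jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
printf '[%s]\n' "$cmp_base_bdev"
```

This also explains the escaped-space pattern in the log's `[[ 512 == \5\1\2\ \ \ ]]` comparison: three null fields contribute three separator spaces.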
00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.027 09:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.027 [2024-10-15 09:06:45.890410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:28.027 [2024-10-15 09:06:45.890548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.027 [2024-10-15 09:06:45.890653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.285 "name": "Existed_Raid", 00:07:28.285 "uuid": "4c18a1a4-6181-4aa2-8ad0-86a16984b771", 00:07:28.285 "strip_size_kb": 64, 00:07:28.285 
"state": "offline", 00:07:28.285 "raid_level": "concat", 00:07:28.285 "superblock": false, 00:07:28.285 "num_base_bdevs": 2, 00:07:28.285 "num_base_bdevs_discovered": 1, 00:07:28.285 "num_base_bdevs_operational": 1, 00:07:28.285 "base_bdevs_list": [ 00:07:28.285 { 00:07:28.285 "name": null, 00:07:28.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.285 "is_configured": false, 00:07:28.285 "data_offset": 0, 00:07:28.285 "data_size": 65536 00:07:28.285 }, 00:07:28.285 { 00:07:28.285 "name": "BaseBdev2", 00:07:28.285 "uuid": "8f946ff4-e1ab-4829-92fb-e695075b4e85", 00:07:28.285 "is_configured": true, 00:07:28.285 "data_offset": 0, 00:07:28.285 "data_size": 65536 00:07:28.285 } 00:07:28.285 ] 00:07:28.285 }' 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.285 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.851 [2024-10-15 09:06:46.510452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:28.851 [2024-10-15 09:06:46.510523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61722 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61722 ']' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61722 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61722 00:07:28.851 killing process with pid 61722 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61722' 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61722 00:07:28.851 [2024-10-15 09:06:46.719667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.851 09:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61722 00:07:28.851 [2024-10-15 09:06:46.739496] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:30.227 ************************************ 00:07:30.227 END TEST raid_state_function_test 00:07:30.227 ************************************ 00:07:30.227 00:07:30.227 real 0m5.535s 00:07:30.227 user 0m7.990s 00:07:30.227 sys 0m0.857s 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.227 09:06:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:30.227 09:06:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:30.227 09:06:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.227 09:06:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.227 ************************************ 00:07:30.227 START TEST raid_state_function_test_sb 00:07:30.227 ************************************ 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61975 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61975' 00:07:30.227 Process raid pid: 61975 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61975 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61975 ']' 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.227 09:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.486 [2024-10-15 09:06:48.189517] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:07:30.486 [2024-10-15 09:06:48.189656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.486 [2024-10-15 09:06:48.358050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.745 [2024-10-15 09:06:48.492252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.003 [2024-10-15 09:06:48.724930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.003 [2024-10-15 09:06:48.724977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.263 [2024-10-15 09:06:49.114987] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:31.263 [2024-10-15 09:06:49.115056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.263 [2024-10-15 09:06:49.115068] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.263 [2024-10-15 09:06:49.115080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.263 "name": "Existed_Raid", 00:07:31.263 "uuid": "8b391655-202e-4851-9fc9-644d8f04ebcf", 00:07:31.263 "strip_size_kb": 64, 00:07:31.263 "state": "configuring", 00:07:31.263 "raid_level": "concat", 00:07:31.263 "superblock": true, 00:07:31.263 "num_base_bdevs": 2, 00:07:31.263 "num_base_bdevs_discovered": 0, 00:07:31.263 "num_base_bdevs_operational": 2, 00:07:31.263 "base_bdevs_list": [ 00:07:31.263 { 00:07:31.263 "name": "BaseBdev1", 00:07:31.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.263 "is_configured": false, 00:07:31.263 "data_offset": 0, 00:07:31.263 "data_size": 0 00:07:31.263 }, 00:07:31.263 { 00:07:31.263 "name": "BaseBdev2", 00:07:31.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.263 "is_configured": false, 00:07:31.263 "data_offset": 0, 00:07:31.263 "data_size": 0 00:07:31.263 } 00:07:31.263 ] 00:07:31.263 }' 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.263 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.833 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:31.833 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.833 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.833 [2024-10-15 09:06:49.518214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:31.833 
[2024-10-15 09:06:49.518350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:31.833 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.833 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.833 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.833 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.833 [2024-10-15 09:06:49.530268] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.833 [2024-10-15 09:06:49.530337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.833 [2024-10-15 09:06:49.530349] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.834 [2024-10-15 09:06:49.530363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.834 [2024-10-15 09:06:49.588065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.834 BaseBdev1 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.834 [ 00:07:31.834 { 00:07:31.834 "name": "BaseBdev1", 00:07:31.834 "aliases": [ 00:07:31.834 "09266be4-3a53-41cb-aa39-d5329e39f29c" 00:07:31.834 ], 00:07:31.834 "product_name": "Malloc disk", 00:07:31.834 "block_size": 512, 00:07:31.834 "num_blocks": 65536, 00:07:31.834 "uuid": "09266be4-3a53-41cb-aa39-d5329e39f29c", 00:07:31.834 "assigned_rate_limits": { 00:07:31.834 "rw_ios_per_sec": 0, 00:07:31.834 "rw_mbytes_per_sec": 0, 00:07:31.834 "r_mbytes_per_sec": 0, 00:07:31.834 "w_mbytes_per_sec": 0 00:07:31.834 }, 00:07:31.834 "claimed": true, 00:07:31.834 "claim_type": 
"exclusive_write", 00:07:31.834 "zoned": false, 00:07:31.834 "supported_io_types": { 00:07:31.834 "read": true, 00:07:31.834 "write": true, 00:07:31.834 "unmap": true, 00:07:31.834 "flush": true, 00:07:31.834 "reset": true, 00:07:31.834 "nvme_admin": false, 00:07:31.834 "nvme_io": false, 00:07:31.834 "nvme_io_md": false, 00:07:31.834 "write_zeroes": true, 00:07:31.834 "zcopy": true, 00:07:31.834 "get_zone_info": false, 00:07:31.834 "zone_management": false, 00:07:31.834 "zone_append": false, 00:07:31.834 "compare": false, 00:07:31.834 "compare_and_write": false, 00:07:31.834 "abort": true, 00:07:31.834 "seek_hole": false, 00:07:31.834 "seek_data": false, 00:07:31.834 "copy": true, 00:07:31.834 "nvme_iov_md": false 00:07:31.834 }, 00:07:31.834 "memory_domains": [ 00:07:31.834 { 00:07:31.834 "dma_device_id": "system", 00:07:31.834 "dma_device_type": 1 00:07:31.834 }, 00:07:31.834 { 00:07:31.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.834 "dma_device_type": 2 00:07:31.834 } 00:07:31.834 ], 00:07:31.834 "driver_specific": {} 00:07:31.834 } 00:07:31.834 ] 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.834 "name": "Existed_Raid", 00:07:31.834 "uuid": "cde9c6ba-b410-4d6d-95f8-8e40c5f75e1a", 00:07:31.834 "strip_size_kb": 64, 00:07:31.834 "state": "configuring", 00:07:31.834 "raid_level": "concat", 00:07:31.834 "superblock": true, 00:07:31.834 "num_base_bdevs": 2, 00:07:31.834 "num_base_bdevs_discovered": 1, 00:07:31.834 "num_base_bdevs_operational": 2, 00:07:31.834 "base_bdevs_list": [ 00:07:31.834 { 00:07:31.834 "name": "BaseBdev1", 00:07:31.834 "uuid": "09266be4-3a53-41cb-aa39-d5329e39f29c", 00:07:31.834 "is_configured": true, 00:07:31.834 "data_offset": 2048, 00:07:31.834 "data_size": 63488 00:07:31.834 }, 00:07:31.834 { 00:07:31.834 "name": "BaseBdev2", 00:07:31.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.834 "is_configured": false, 00:07:31.834 
"data_offset": 0, 00:07:31.834 "data_size": 0 00:07:31.834 } 00:07:31.834 ] 00:07:31.834 }' 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.834 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.402 09:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.402 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.402 09:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.402 [2024-10-15 09:06:49.999551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.402 [2024-10-15 09:06:49.999731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.402 [2024-10-15 09:06:50.007626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.402 [2024-10-15 09:06:50.009946] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.402 [2024-10-15 09:06:50.010062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.402 "name": "Existed_Raid", 00:07:32.402 "uuid": "f0249fe7-ee71-4c75-af90-dd435c2d3441", 00:07:32.402 "strip_size_kb": 64, 00:07:32.402 "state": "configuring", 00:07:32.402 "raid_level": "concat", 00:07:32.402 "superblock": true, 00:07:32.402 "num_base_bdevs": 2, 00:07:32.402 "num_base_bdevs_discovered": 1, 00:07:32.402 "num_base_bdevs_operational": 2, 00:07:32.402 "base_bdevs_list": [ 00:07:32.402 { 00:07:32.402 "name": "BaseBdev1", 00:07:32.402 "uuid": "09266be4-3a53-41cb-aa39-d5329e39f29c", 00:07:32.402 "is_configured": true, 00:07:32.402 "data_offset": 2048, 00:07:32.402 "data_size": 63488 00:07:32.402 }, 00:07:32.402 { 00:07:32.402 "name": "BaseBdev2", 00:07:32.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.402 "is_configured": false, 00:07:32.402 "data_offset": 0, 00:07:32.402 "data_size": 0 00:07:32.402 } 00:07:32.402 ] 00:07:32.402 }' 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.402 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.662 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:32.662 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.662 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.663 [2024-10-15 09:06:50.490241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.663 [2024-10-15 09:06:50.490635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:32.663 [2024-10-15 09:06:50.490712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.663 BaseBdev2 00:07:32.663 [2024-10-15 09:06:50.491060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:07:32.663 [2024-10-15 09:06:50.491288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:32.663 [2024-10-15 09:06:50.491343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.663 [2024-10-15 09:06:50.491565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.663 [ 00:07:32.663 { 00:07:32.663 "name": "BaseBdev2", 00:07:32.663 "aliases": [ 00:07:32.663 "e66493db-cf85-4f03-b2b0-ed0d93c31b92" 00:07:32.663 ], 00:07:32.663 "product_name": "Malloc disk", 00:07:32.663 "block_size": 512, 00:07:32.663 "num_blocks": 65536, 00:07:32.663 "uuid": "e66493db-cf85-4f03-b2b0-ed0d93c31b92", 00:07:32.663 "assigned_rate_limits": { 00:07:32.663 "rw_ios_per_sec": 0, 00:07:32.663 "rw_mbytes_per_sec": 0, 00:07:32.663 "r_mbytes_per_sec": 0, 00:07:32.663 "w_mbytes_per_sec": 0 00:07:32.663 }, 00:07:32.663 "claimed": true, 00:07:32.663 "claim_type": "exclusive_write", 00:07:32.663 "zoned": false, 00:07:32.663 "supported_io_types": { 00:07:32.663 "read": true, 00:07:32.663 "write": true, 00:07:32.663 "unmap": true, 00:07:32.663 "flush": true, 00:07:32.663 "reset": true, 00:07:32.663 "nvme_admin": false, 00:07:32.663 "nvme_io": false, 00:07:32.663 "nvme_io_md": false, 00:07:32.663 "write_zeroes": true, 00:07:32.663 "zcopy": true, 00:07:32.663 "get_zone_info": false, 00:07:32.663 "zone_management": false, 00:07:32.663 "zone_append": false, 00:07:32.663 "compare": false, 00:07:32.663 "compare_and_write": false, 00:07:32.663 "abort": true, 00:07:32.663 "seek_hole": false, 00:07:32.663 "seek_data": false, 00:07:32.663 "copy": true, 00:07:32.663 "nvme_iov_md": false 00:07:32.663 }, 00:07:32.663 "memory_domains": [ 00:07:32.663 { 00:07:32.663 "dma_device_id": "system", 00:07:32.663 "dma_device_type": 1 00:07:32.663 }, 00:07:32.663 { 00:07:32.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.663 "dma_device_type": 2 00:07:32.663 } 00:07:32.663 ], 00:07:32.663 "driver_specific": {} 00:07:32.663 } 00:07:32.663 ] 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.663 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.922 09:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.922 "name": "Existed_Raid", 00:07:32.922 "uuid": "f0249fe7-ee71-4c75-af90-dd435c2d3441", 00:07:32.922 "strip_size_kb": 64, 00:07:32.922 "state": "online", 00:07:32.922 "raid_level": "concat", 00:07:32.922 "superblock": true, 00:07:32.922 "num_base_bdevs": 2, 00:07:32.922 "num_base_bdevs_discovered": 2, 00:07:32.922 "num_base_bdevs_operational": 2, 00:07:32.922 "base_bdevs_list": [ 00:07:32.922 { 00:07:32.922 "name": "BaseBdev1", 00:07:32.922 "uuid": "09266be4-3a53-41cb-aa39-d5329e39f29c", 00:07:32.922 "is_configured": true, 00:07:32.922 "data_offset": 2048, 00:07:32.922 "data_size": 63488 00:07:32.922 }, 00:07:32.922 { 00:07:32.922 "name": "BaseBdev2", 00:07:32.922 "uuid": "e66493db-cf85-4f03-b2b0-ed0d93c31b92", 00:07:32.922 "is_configured": true, 00:07:32.922 "data_offset": 2048, 00:07:32.922 "data_size": 63488 00:07:32.922 } 00:07:32.922 ] 00:07:32.922 }' 00:07:32.922 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.922 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.181 [2024-10-15 09:06:50.945912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.181 "name": "Existed_Raid", 00:07:33.181 "aliases": [ 00:07:33.181 "f0249fe7-ee71-4c75-af90-dd435c2d3441" 00:07:33.181 ], 00:07:33.181 "product_name": "Raid Volume", 00:07:33.181 "block_size": 512, 00:07:33.181 "num_blocks": 126976, 00:07:33.181 "uuid": "f0249fe7-ee71-4c75-af90-dd435c2d3441", 00:07:33.181 "assigned_rate_limits": { 00:07:33.181 "rw_ios_per_sec": 0, 00:07:33.181 "rw_mbytes_per_sec": 0, 00:07:33.181 "r_mbytes_per_sec": 0, 00:07:33.181 "w_mbytes_per_sec": 0 00:07:33.181 }, 00:07:33.181 "claimed": false, 00:07:33.181 "zoned": false, 00:07:33.181 "supported_io_types": { 00:07:33.181 "read": true, 00:07:33.181 "write": true, 00:07:33.181 "unmap": true, 00:07:33.181 "flush": true, 00:07:33.181 "reset": true, 00:07:33.181 "nvme_admin": false, 00:07:33.181 "nvme_io": false, 00:07:33.181 "nvme_io_md": false, 00:07:33.181 "write_zeroes": true, 00:07:33.181 "zcopy": false, 00:07:33.181 "get_zone_info": false, 00:07:33.181 "zone_management": false, 00:07:33.181 "zone_append": false, 00:07:33.181 "compare": false, 00:07:33.181 "compare_and_write": false, 00:07:33.181 "abort": false, 00:07:33.181 "seek_hole": false, 00:07:33.181 "seek_data": false, 00:07:33.181 "copy": false, 00:07:33.181 "nvme_iov_md": false 00:07:33.181 }, 00:07:33.181 "memory_domains": [ 00:07:33.181 { 00:07:33.181 "dma_device_id": "system", 00:07:33.181 "dma_device_type": 1 
00:07:33.181 }, 00:07:33.181 { 00:07:33.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.181 "dma_device_type": 2 00:07:33.181 }, 00:07:33.181 { 00:07:33.181 "dma_device_id": "system", 00:07:33.181 "dma_device_type": 1 00:07:33.181 }, 00:07:33.181 { 00:07:33.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.181 "dma_device_type": 2 00:07:33.181 } 00:07:33.181 ], 00:07:33.181 "driver_specific": { 00:07:33.181 "raid": { 00:07:33.181 "uuid": "f0249fe7-ee71-4c75-af90-dd435c2d3441", 00:07:33.181 "strip_size_kb": 64, 00:07:33.181 "state": "online", 00:07:33.181 "raid_level": "concat", 00:07:33.181 "superblock": true, 00:07:33.181 "num_base_bdevs": 2, 00:07:33.181 "num_base_bdevs_discovered": 2, 00:07:33.181 "num_base_bdevs_operational": 2, 00:07:33.181 "base_bdevs_list": [ 00:07:33.181 { 00:07:33.181 "name": "BaseBdev1", 00:07:33.181 "uuid": "09266be4-3a53-41cb-aa39-d5329e39f29c", 00:07:33.181 "is_configured": true, 00:07:33.181 "data_offset": 2048, 00:07:33.181 "data_size": 63488 00:07:33.181 }, 00:07:33.181 { 00:07:33.181 "name": "BaseBdev2", 00:07:33.181 "uuid": "e66493db-cf85-4f03-b2b0-ed0d93c31b92", 00:07:33.181 "is_configured": true, 00:07:33.181 "data_offset": 2048, 00:07:33.181 "data_size": 63488 00:07:33.181 } 00:07:33.181 ] 00:07:33.181 } 00:07:33.181 } 00:07:33.181 }' 00:07:33.181 09:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.181 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:33.181 BaseBdev2' 00:07:33.181 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.181 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.181 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:33.440 09:06:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.440 [2024-10-15 09:06:51.169599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:33.440 [2024-10-15 09:06:51.169750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.440 [2024-10-15 09:06:51.169878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.440 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.697 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.697 "name": "Existed_Raid", 00:07:33.697 "uuid": "f0249fe7-ee71-4c75-af90-dd435c2d3441", 00:07:33.697 "strip_size_kb": 64, 00:07:33.697 "state": "offline", 00:07:33.697 "raid_level": "concat", 00:07:33.697 "superblock": true, 00:07:33.697 "num_base_bdevs": 2, 00:07:33.697 "num_base_bdevs_discovered": 1, 00:07:33.697 "num_base_bdevs_operational": 1, 00:07:33.697 "base_bdevs_list": [ 00:07:33.697 { 00:07:33.697 "name": null, 00:07:33.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.698 "is_configured": false, 00:07:33.698 "data_offset": 0, 00:07:33.698 "data_size": 63488 00:07:33.698 }, 00:07:33.698 { 00:07:33.698 "name": "BaseBdev2", 00:07:33.698 "uuid": "e66493db-cf85-4f03-b2b0-ed0d93c31b92", 00:07:33.698 "is_configured": true, 00:07:33.698 "data_offset": 2048, 00:07:33.698 "data_size": 63488 00:07:33.698 } 00:07:33.698 ] 00:07:33.698 }' 00:07:33.698 09:06:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.698 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.956 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.956 [2024-10-15 09:06:51.762042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:33.956 [2024-10-15 09:06:51.762126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61975 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61975 ']' 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61975 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61975 00:07:34.215 killing process with pid 61975 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61975' 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61975 00:07:34.215 09:06:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61975 00:07:34.215 [2024-10-15 09:06:51.964443] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.215 [2024-10-15 09:06:51.985713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.593 09:06:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.593 00:07:35.593 real 0m5.215s 00:07:35.593 user 0m7.399s 00:07:35.593 sys 0m0.746s 00:07:35.593 ************************************ 00:07:35.593 END TEST raid_state_function_test_sb 00:07:35.593 ************************************ 00:07:35.593 09:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.593 09:06:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.593 09:06:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:35.593 09:06:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:35.593 09:06:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.593 09:06:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.593 ************************************ 00:07:35.593 START TEST raid_superblock_test 00:07:35.593 ************************************ 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62227 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62227 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62227 ']' 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.593 09:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.593 [2024-10-15 09:06:53.474181] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:07:35.593 [2024-10-15 09:06:53.474321] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62227 ] 00:07:35.851 [2024-10-15 09:06:53.628527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.110 [2024-10-15 09:06:53.765183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.368 [2024-10-15 09:06:54.009639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.368 [2024-10-15 09:06:54.009719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.627 09:06:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.627 malloc1 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.627 [2024-10-15 09:06:54.483745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:36.627 [2024-10-15 09:06:54.483928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.627 [2024-10-15 09:06:54.483994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:36.627 [2024-10-15 09:06:54.484057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.627 
[2024-10-15 09:06:54.486732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.627 [2024-10-15 09:06:54.486842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:36.627 pt1 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.627 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.886 malloc2 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.886 09:06:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.886 [2024-10-15 09:06:54.550606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:36.886 [2024-10-15 09:06:54.550790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.886 [2024-10-15 09:06:54.550828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:36.886 [2024-10-15 09:06:54.550839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.886 [2024-10-15 09:06:54.553406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.886 [2024-10-15 09:06:54.553449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:36.886 pt2 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.886 [2024-10-15 09:06:54.562678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:36.886 [2024-10-15 09:06:54.564936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:36.886 [2024-10-15 09:06:54.565160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:36.886 [2024-10-15 09:06:54.565178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.886 
[2024-10-15 09:06:54.565533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:36.886 [2024-10-15 09:06:54.565731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:36.886 [2024-10-15 09:06:54.565746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:36.886 [2024-10-15 09:06:54.565978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.886 09:06:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.886 "name": "raid_bdev1", 00:07:36.886 "uuid": "4d3f2867-b478-4846-8f4b-a7b0325f8927", 00:07:36.886 "strip_size_kb": 64, 00:07:36.886 "state": "online", 00:07:36.886 "raid_level": "concat", 00:07:36.886 "superblock": true, 00:07:36.886 "num_base_bdevs": 2, 00:07:36.886 "num_base_bdevs_discovered": 2, 00:07:36.886 "num_base_bdevs_operational": 2, 00:07:36.886 "base_bdevs_list": [ 00:07:36.886 { 00:07:36.886 "name": "pt1", 00:07:36.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.886 "is_configured": true, 00:07:36.886 "data_offset": 2048, 00:07:36.886 "data_size": 63488 00:07:36.886 }, 00:07:36.886 { 00:07:36.886 "name": "pt2", 00:07:36.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.886 "is_configured": true, 00:07:36.886 "data_offset": 2048, 00:07:36.886 "data_size": 63488 00:07:36.886 } 00:07:36.886 ] 00:07:36.886 }' 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.886 09:06:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.454 
09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.454 [2024-10-15 09:06:55.054160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.454 "name": "raid_bdev1", 00:07:37.454 "aliases": [ 00:07:37.454 "4d3f2867-b478-4846-8f4b-a7b0325f8927" 00:07:37.454 ], 00:07:37.454 "product_name": "Raid Volume", 00:07:37.454 "block_size": 512, 00:07:37.454 "num_blocks": 126976, 00:07:37.454 "uuid": "4d3f2867-b478-4846-8f4b-a7b0325f8927", 00:07:37.454 "assigned_rate_limits": { 00:07:37.454 "rw_ios_per_sec": 0, 00:07:37.454 "rw_mbytes_per_sec": 0, 00:07:37.454 "r_mbytes_per_sec": 0, 00:07:37.454 "w_mbytes_per_sec": 0 00:07:37.454 }, 00:07:37.454 "claimed": false, 00:07:37.454 "zoned": false, 00:07:37.454 "supported_io_types": { 00:07:37.454 "read": true, 00:07:37.454 "write": true, 00:07:37.454 "unmap": true, 00:07:37.454 "flush": true, 00:07:37.454 "reset": true, 00:07:37.454 "nvme_admin": false, 00:07:37.454 "nvme_io": false, 00:07:37.454 "nvme_io_md": false, 00:07:37.454 "write_zeroes": true, 00:07:37.454 "zcopy": false, 00:07:37.454 "get_zone_info": false, 00:07:37.454 "zone_management": false, 00:07:37.454 "zone_append": false, 00:07:37.454 "compare": false, 00:07:37.454 "compare_and_write": false, 00:07:37.454 "abort": false, 00:07:37.454 "seek_hole": false, 00:07:37.454 
"seek_data": false, 00:07:37.454 "copy": false, 00:07:37.454 "nvme_iov_md": false 00:07:37.454 }, 00:07:37.454 "memory_domains": [ 00:07:37.454 { 00:07:37.454 "dma_device_id": "system", 00:07:37.454 "dma_device_type": 1 00:07:37.454 }, 00:07:37.454 { 00:07:37.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.454 "dma_device_type": 2 00:07:37.454 }, 00:07:37.454 { 00:07:37.454 "dma_device_id": "system", 00:07:37.454 "dma_device_type": 1 00:07:37.454 }, 00:07:37.454 { 00:07:37.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.454 "dma_device_type": 2 00:07:37.454 } 00:07:37.454 ], 00:07:37.454 "driver_specific": { 00:07:37.454 "raid": { 00:07:37.454 "uuid": "4d3f2867-b478-4846-8f4b-a7b0325f8927", 00:07:37.454 "strip_size_kb": 64, 00:07:37.454 "state": "online", 00:07:37.454 "raid_level": "concat", 00:07:37.454 "superblock": true, 00:07:37.454 "num_base_bdevs": 2, 00:07:37.454 "num_base_bdevs_discovered": 2, 00:07:37.454 "num_base_bdevs_operational": 2, 00:07:37.454 "base_bdevs_list": [ 00:07:37.454 { 00:07:37.454 "name": "pt1", 00:07:37.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.454 "is_configured": true, 00:07:37.454 "data_offset": 2048, 00:07:37.454 "data_size": 63488 00:07:37.454 }, 00:07:37.454 { 00:07:37.454 "name": "pt2", 00:07:37.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.454 "is_configured": true, 00:07:37.454 "data_offset": 2048, 00:07:37.454 "data_size": 63488 00:07:37.454 } 00:07:37.454 ] 00:07:37.454 } 00:07:37.454 } 00:07:37.454 }' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:37.454 pt2' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.454 09:06:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.454 [2024-10-15 09:06:55.273891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d3f2867-b478-4846-8f4b-a7b0325f8927 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d3f2867-b478-4846-8f4b-a7b0325f8927 ']' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.454 [2024-10-15 09:06:55.317526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.454 [2024-10-15 09:06:55.317615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.454 [2024-10-15 09:06:55.317770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.454 [2024-10-15 09:06:55.317862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.454 [2024-10-15 09:06:55.317920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.454 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-10-15 09:06:55.453614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:37.714 [2024-10-15 09:06:55.455885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:37.714 [2024-10-15 09:06:55.455979] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:37.714 [2024-10-15 09:06:55.456046] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:37.714 [2024-10-15 09:06:55.456065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.714 [2024-10-15 09:06:55.456081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:37.714 request: 00:07:37.714 { 00:07:37.714 "name": "raid_bdev1", 00:07:37.714 "raid_level": "concat", 00:07:37.714 "base_bdevs": [ 00:07:37.714 "malloc1", 00:07:37.714 "malloc2" 00:07:37.714 ], 00:07:37.714 "strip_size_kb": 64, 00:07:37.714 "superblock": false, 00:07:37.714 "method": "bdev_raid_create", 00:07:37.714 "req_id": 1 00:07:37.714 } 00:07:37.714 Got JSON-RPC error response 00:07:37.714 response: 00:07:37.714 { 00:07:37.714 "code": -17, 00:07:37.714 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:37.714 } 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-10-15 09:06:55.521539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:37.714 [2024-10-15 09:06:55.521732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.714 [2024-10-15 09:06:55.521791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:37.714 [2024-10-15 09:06:55.521831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.714 [2024-10-15 09:06:55.524442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.714 [2024-10-15 09:06:55.524559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:37.714 [2024-10-15 09:06:55.524725] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:37.714 [2024-10-15 09:06:55.524848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:37.714 pt1 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.714 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.715 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.715 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.715 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.715 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.715 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.715 "name": "raid_bdev1", 00:07:37.715 "uuid": "4d3f2867-b478-4846-8f4b-a7b0325f8927", 00:07:37.715 "strip_size_kb": 64, 00:07:37.715 "state": "configuring", 00:07:37.715 "raid_level": "concat", 00:07:37.715 "superblock": true, 00:07:37.715 "num_base_bdevs": 2, 00:07:37.715 "num_base_bdevs_discovered": 1, 00:07:37.715 "num_base_bdevs_operational": 2, 00:07:37.715 "base_bdevs_list": [ 00:07:37.715 { 00:07:37.715 
"name": "pt1", 00:07:37.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.715 "is_configured": true, 00:07:37.715 "data_offset": 2048, 00:07:37.715 "data_size": 63488 00:07:37.715 }, 00:07:37.715 { 00:07:37.715 "name": null, 00:07:37.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.715 "is_configured": false, 00:07:37.715 "data_offset": 2048, 00:07:37.715 "data_size": 63488 00:07:37.715 } 00:07:37.715 ] 00:07:37.715 }' 00:07:37.715 09:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.715 09:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.281 [2024-10-15 09:06:56.025527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:38.281 [2024-10-15 09:06:56.025624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.281 [2024-10-15 09:06:56.025653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:38.281 [2024-10-15 09:06:56.025667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.281 [2024-10-15 09:06:56.026285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.281 [2024-10-15 09:06:56.026330] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:38.281 [2024-10-15 09:06:56.026437] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:38.281 [2024-10-15 09:06:56.026469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:38.281 [2024-10-15 09:06:56.026598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:38.281 [2024-10-15 09:06:56.026617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:38.281 [2024-10-15 09:06:56.026935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:38.281 [2024-10-15 09:06:56.027125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:38.281 [2024-10-15 09:06:56.027138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:38.281 [2024-10-15 09:06:56.027315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.281 pt2 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.281 
09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.281 "name": "raid_bdev1", 00:07:38.281 "uuid": "4d3f2867-b478-4846-8f4b-a7b0325f8927", 00:07:38.281 "strip_size_kb": 64, 00:07:38.281 "state": "online", 00:07:38.281 "raid_level": "concat", 00:07:38.281 "superblock": true, 00:07:38.281 "num_base_bdevs": 2, 00:07:38.281 "num_base_bdevs_discovered": 2, 00:07:38.281 "num_base_bdevs_operational": 2, 00:07:38.281 "base_bdevs_list": [ 00:07:38.281 { 00:07:38.281 "name": "pt1", 00:07:38.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.281 "is_configured": true, 00:07:38.281 "data_offset": 2048, 00:07:38.281 "data_size": 63488 00:07:38.281 }, 00:07:38.281 { 00:07:38.281 "name": "pt2", 00:07:38.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.281 "is_configured": true, 00:07:38.281 "data_offset": 2048, 00:07:38.281 "data_size": 63488 
00:07:38.281 } 00:07:38.281 ] 00:07:38.281 }' 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.281 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.848 [2024-10-15 09:06:56.509880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.848 "name": "raid_bdev1", 00:07:38.848 "aliases": [ 00:07:38.848 "4d3f2867-b478-4846-8f4b-a7b0325f8927" 00:07:38.848 ], 00:07:38.848 "product_name": "Raid Volume", 00:07:38.848 "block_size": 512, 00:07:38.848 "num_blocks": 126976, 00:07:38.848 "uuid": "4d3f2867-b478-4846-8f4b-a7b0325f8927", 00:07:38.848 "assigned_rate_limits": { 00:07:38.848 
"rw_ios_per_sec": 0, 00:07:38.848 "rw_mbytes_per_sec": 0, 00:07:38.848 "r_mbytes_per_sec": 0, 00:07:38.848 "w_mbytes_per_sec": 0 00:07:38.848 }, 00:07:38.848 "claimed": false, 00:07:38.848 "zoned": false, 00:07:38.848 "supported_io_types": { 00:07:38.848 "read": true, 00:07:38.848 "write": true, 00:07:38.848 "unmap": true, 00:07:38.848 "flush": true, 00:07:38.848 "reset": true, 00:07:38.848 "nvme_admin": false, 00:07:38.848 "nvme_io": false, 00:07:38.848 "nvme_io_md": false, 00:07:38.848 "write_zeroes": true, 00:07:38.848 "zcopy": false, 00:07:38.848 "get_zone_info": false, 00:07:38.848 "zone_management": false, 00:07:38.848 "zone_append": false, 00:07:38.848 "compare": false, 00:07:38.848 "compare_and_write": false, 00:07:38.848 "abort": false, 00:07:38.848 "seek_hole": false, 00:07:38.848 "seek_data": false, 00:07:38.848 "copy": false, 00:07:38.848 "nvme_iov_md": false 00:07:38.848 }, 00:07:38.848 "memory_domains": [ 00:07:38.848 { 00:07:38.848 "dma_device_id": "system", 00:07:38.848 "dma_device_type": 1 00:07:38.848 }, 00:07:38.848 { 00:07:38.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.848 "dma_device_type": 2 00:07:38.848 }, 00:07:38.848 { 00:07:38.848 "dma_device_id": "system", 00:07:38.848 "dma_device_type": 1 00:07:38.848 }, 00:07:38.848 { 00:07:38.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.848 "dma_device_type": 2 00:07:38.848 } 00:07:38.848 ], 00:07:38.848 "driver_specific": { 00:07:38.848 "raid": { 00:07:38.848 "uuid": "4d3f2867-b478-4846-8f4b-a7b0325f8927", 00:07:38.848 "strip_size_kb": 64, 00:07:38.848 "state": "online", 00:07:38.848 "raid_level": "concat", 00:07:38.848 "superblock": true, 00:07:38.848 "num_base_bdevs": 2, 00:07:38.848 "num_base_bdevs_discovered": 2, 00:07:38.848 "num_base_bdevs_operational": 2, 00:07:38.848 "base_bdevs_list": [ 00:07:38.848 { 00:07:38.848 "name": "pt1", 00:07:38.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.848 "is_configured": true, 00:07:38.848 "data_offset": 2048, 00:07:38.848 
"data_size": 63488 00:07:38.848 }, 00:07:38.848 { 00:07:38.848 "name": "pt2", 00:07:38.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.848 "is_configured": true, 00:07:38.848 "data_offset": 2048, 00:07:38.848 "data_size": 63488 00:07:38.848 } 00:07:38.848 ] 00:07:38.848 } 00:07:38.848 } 00:07:38.848 }' 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:38.848 pt2' 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.848 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.849 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.849 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:38.849 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.849 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.849 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.849 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.107 [2024-10-15 09:06:56.757877] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d3f2867-b478-4846-8f4b-a7b0325f8927 '!=' 4d3f2867-b478-4846-8f4b-a7b0325f8927 ']' 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62227 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62227 ']' 
00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62227 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62227 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62227' 00:07:39.107 killing process with pid 62227 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62227 00:07:39.107 09:06:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62227 00:07:39.107 [2024-10-15 09:06:56.844981] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.107 [2024-10-15 09:06:56.845116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.107 [2024-10-15 09:06:56.845270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.107 [2024-10-15 09:06:56.845364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:39.365 [2024-10-15 09:06:57.106094] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.737 09:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:40.737 00:07:40.737 real 0m5.107s 00:07:40.737 user 0m7.169s 00:07:40.737 sys 0m0.728s 00:07:40.737 09:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.737 09:06:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.737 ************************************ 00:07:40.737 END TEST raid_superblock_test 00:07:40.737 ************************************ 00:07:40.737 09:06:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:40.737 09:06:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:40.737 09:06:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.737 09:06:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.737 ************************************ 00:07:40.737 START TEST raid_read_error_test 00:07:40.737 ************************************ 00:07:40.737 09:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:40.737 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:40.737 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:40.737 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:40.738 
09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dBnqWwIBe6 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62444 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62444 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62444 ']' 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.738 09:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.996 [2024-10-15 09:06:58.665980] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:07:40.996 [2024-10-15 09:06:58.666202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62444 ] 00:07:40.996 [2024-10-15 09:06:58.846466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.254 [2024-10-15 09:06:59.030034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.511 [2024-10-15 09:06:59.287210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.511 [2024-10-15 09:06:59.287296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.076 BaseBdev1_malloc 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.076 true 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.076 [2024-10-15 09:06:59.810192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:42.076 [2024-10-15 09:06:59.810363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.076 [2024-10-15 09:06:59.810400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:42.076 [2024-10-15 09:06:59.810415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.076 [2024-10-15 09:06:59.813177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.076 [2024-10-15 09:06:59.813250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:42.076 BaseBdev1 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.076 09:06:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.076 BaseBdev2_malloc 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.076 true 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.076 [2024-10-15 09:06:59.879190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:42.076 [2024-10-15 09:06:59.879284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.076 [2024-10-15 09:06:59.879311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:42.076 [2024-10-15 09:06:59.879324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.076 [2024-10-15 09:06:59.882156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.076 [2024-10-15 09:06:59.882234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:42.076 BaseBdev2 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.076 [2024-10-15 09:06:59.891366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.076 [2024-10-15 09:06:59.893775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.076 [2024-10-15 09:06:59.894069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.076 [2024-10-15 09:06:59.894093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.076 [2024-10-15 09:06:59.894443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.076 [2024-10-15 09:06:59.894648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.076 [2024-10-15 09:06:59.894665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:42.076 [2024-10-15 09:06:59.894927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.076 "name": "raid_bdev1", 00:07:42.076 "uuid": "0bbb0529-d72b-4538-a5dd-3e83d911f423", 00:07:42.076 "strip_size_kb": 64, 00:07:42.076 "state": "online", 00:07:42.076 "raid_level": "concat", 00:07:42.076 "superblock": true, 00:07:42.076 "num_base_bdevs": 2, 00:07:42.076 "num_base_bdevs_discovered": 2, 00:07:42.076 "num_base_bdevs_operational": 2, 00:07:42.076 "base_bdevs_list": [ 00:07:42.076 { 00:07:42.076 "name": "BaseBdev1", 00:07:42.076 "uuid": "b32c7844-be28-5069-a70f-dc72b0e1e4aa", 00:07:42.076 "is_configured": true, 00:07:42.076 "data_offset": 2048, 00:07:42.076 "data_size": 63488 
00:07:42.076 }, 00:07:42.076 { 00:07:42.076 "name": "BaseBdev2", 00:07:42.076 "uuid": "004550b1-a5ec-53c8-87aa-7340377d1599", 00:07:42.076 "is_configured": true, 00:07:42.076 "data_offset": 2048, 00:07:42.076 "data_size": 63488 00:07:42.076 } 00:07:42.076 ] 00:07:42.076 }' 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.076 09:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.642 09:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:42.642 09:07:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:42.642 [2024-10-15 09:07:00.488306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.682 "name": "raid_bdev1", 00:07:43.682 "uuid": "0bbb0529-d72b-4538-a5dd-3e83d911f423", 00:07:43.682 "strip_size_kb": 64, 00:07:43.682 "state": "online", 00:07:43.682 "raid_level": "concat", 00:07:43.682 "superblock": true, 00:07:43.682 "num_base_bdevs": 2, 00:07:43.682 "num_base_bdevs_discovered": 2, 00:07:43.682 "num_base_bdevs_operational": 2, 00:07:43.682 "base_bdevs_list": [ 00:07:43.682 { 00:07:43.682 "name": "BaseBdev1", 00:07:43.682 "uuid": "b32c7844-be28-5069-a70f-dc72b0e1e4aa", 00:07:43.682 "is_configured": true, 00:07:43.682 "data_offset": 2048, 00:07:43.682 "data_size": 63488 
00:07:43.682 }, 00:07:43.682 { 00:07:43.682 "name": "BaseBdev2", 00:07:43.682 "uuid": "004550b1-a5ec-53c8-87aa-7340377d1599", 00:07:43.682 "is_configured": true, 00:07:43.682 "data_offset": 2048, 00:07:43.682 "data_size": 63488 00:07:43.682 } 00:07:43.682 ] 00:07:43.682 }' 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.682 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.248 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.248 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.248 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.248 [2024-10-15 09:07:01.866171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.248 [2024-10-15 09:07:01.866370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.248 [2024-10-15 09:07:01.870350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.248 [2024-10-15 09:07:01.870607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.248 [2024-10-15 09:07:01.870739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.248 [2024-10-15 09:07:01.870845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:44.248 { 00:07:44.248 "results": [ 00:07:44.248 { 00:07:44.248 "job": "raid_bdev1", 00:07:44.248 "core_mask": "0x1", 00:07:44.248 "workload": "randrw", 00:07:44.248 "percentage": 50, 00:07:44.248 "status": "finished", 00:07:44.248 "queue_depth": 1, 00:07:44.248 "io_size": 131072, 00:07:44.248 "runtime": 1.378262, 00:07:44.248 "iops": 12343.806910442281, 00:07:44.248 "mibps": 1542.9758638052851, 00:07:44.248 
"io_failed": 1, 00:07:44.248 "io_timeout": 0, 00:07:44.248 "avg_latency_us": 112.68101696881531, 00:07:44.248 "min_latency_us": 33.760698689956335, 00:07:44.248 "max_latency_us": 1831.5737991266376 00:07:44.248 } 00:07:44.248 ], 00:07:44.248 "core_count": 1 00:07:44.248 } 00:07:44.248 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.248 09:07:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62444 00:07:44.248 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62444 ']' 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62444 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62444 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62444' 00:07:44.249 killing process with pid 62444 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62444 00:07:44.249 09:07:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62444 00:07:44.249 [2024-10-15 09:07:01.906100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.249 [2024-10-15 09:07:02.077804] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dBnqWwIBe6 00:07:45.620 09:07:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:45.620 00:07:45.620 real 0m4.979s 00:07:45.620 user 0m6.153s 00:07:45.620 sys 0m0.562s 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.620 09:07:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.620 ************************************ 00:07:45.620 END TEST raid_read_error_test 00:07:45.620 ************************************ 00:07:45.878 09:07:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:45.878 09:07:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.878 09:07:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.878 09:07:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.878 ************************************ 00:07:45.878 START TEST raid_write_error_test 00:07:45.878 ************************************ 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:45.878 09:07:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:45.878 09:07:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VzQUheFnxF 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62590 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62590 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62590 ']' 00:07:45.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.878 09:07:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.878 [2024-10-15 09:07:03.668978] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:07:45.878 [2024-10-15 09:07:03.669235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62590 ] 00:07:46.137 [2024-10-15 09:07:03.825719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.137 [2024-10-15 09:07:04.001348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.394 [2024-10-15 09:07:04.268971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.394 [2024-10-15 09:07:04.269238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.958 BaseBdev1_malloc 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.958 true 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.958 [2024-10-15 09:07:04.790744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.958 [2024-10-15 09:07:04.790838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.958 [2024-10-15 09:07:04.790870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:46.958 [2024-10-15 09:07:04.790886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.958 [2024-10-15 09:07:04.793761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.958 [2024-10-15 09:07:04.793940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.958 BaseBdev1 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.958 BaseBdev2_malloc 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:46.958 09:07:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.958 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.216 true 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.217 [2024-10-15 09:07:04.856655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:47.217 [2024-10-15 09:07:04.856769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.217 [2024-10-15 09:07:04.856799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:47.217 [2024-10-15 09:07:04.856814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.217 [2024-10-15 09:07:04.859793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.217 [2024-10-15 09:07:04.859965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:47.217 BaseBdev2 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.217 [2024-10-15 09:07:04.864857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:47.217 [2024-10-15 09:07:04.867353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.217 [2024-10-15 09:07:04.867724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.217 [2024-10-15 09:07:04.867754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.217 [2024-10-15 09:07:04.868133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.217 [2024-10-15 09:07:04.868362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.217 [2024-10-15 09:07:04.868381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:47.217 [2024-10-15 09:07:04.868732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.217 09:07:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.217 "name": "raid_bdev1", 00:07:47.217 "uuid": "f281dafe-afaa-4a0a-a6bb-be2fa2fec36a", 00:07:47.217 "strip_size_kb": 64, 00:07:47.217 "state": "online", 00:07:47.217 "raid_level": "concat", 00:07:47.217 "superblock": true, 00:07:47.217 "num_base_bdevs": 2, 00:07:47.217 "num_base_bdevs_discovered": 2, 00:07:47.217 "num_base_bdevs_operational": 2, 00:07:47.217 "base_bdevs_list": [ 00:07:47.217 { 00:07:47.217 "name": "BaseBdev1", 00:07:47.217 "uuid": "e87ea934-7408-576c-bcf3-adb37bc8a0c7", 00:07:47.217 "is_configured": true, 00:07:47.217 "data_offset": 2048, 00:07:47.217 "data_size": 63488 00:07:47.217 }, 00:07:47.217 { 00:07:47.217 "name": "BaseBdev2", 00:07:47.217 "uuid": "55e24f16-74cc-5a7d-823a-74667c13d700", 00:07:47.217 "is_configured": true, 00:07:47.217 "data_offset": 2048, 00:07:47.217 "data_size": 63488 00:07:47.217 } 00:07:47.217 ] 00:07:47.217 }' 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.217 09:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.782 09:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:47.782 09:07:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.782 [2024-10-15 09:07:05.561553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.720 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.720 "name": "raid_bdev1", 00:07:48.720 "uuid": "f281dafe-afaa-4a0a-a6bb-be2fa2fec36a", 00:07:48.720 "strip_size_kb": 64, 00:07:48.721 "state": "online", 00:07:48.721 "raid_level": "concat", 00:07:48.721 "superblock": true, 00:07:48.721 "num_base_bdevs": 2, 00:07:48.721 "num_base_bdevs_discovered": 2, 00:07:48.721 "num_base_bdevs_operational": 2, 00:07:48.721 "base_bdevs_list": [ 00:07:48.721 { 00:07:48.721 "name": "BaseBdev1", 00:07:48.721 "uuid": "e87ea934-7408-576c-bcf3-adb37bc8a0c7", 00:07:48.721 "is_configured": true, 00:07:48.721 "data_offset": 2048, 00:07:48.721 "data_size": 63488 00:07:48.721 }, 00:07:48.721 { 00:07:48.721 "name": "BaseBdev2", 00:07:48.721 "uuid": "55e24f16-74cc-5a7d-823a-74667c13d700", 00:07:48.721 "is_configured": true, 00:07:48.721 "data_offset": 2048, 00:07:48.721 "data_size": 63488 00:07:48.721 } 00:07:48.721 ] 00:07:48.721 }' 00:07:48.721 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.721 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.286 [2024-10-15 09:07:06.903236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.286 [2024-10-15 09:07:06.903297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.286 [2024-10-15 09:07:06.906672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.286 [2024-10-15 09:07:06.906897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.286 [2024-10-15 09:07:06.906971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.286 [2024-10-15 09:07:06.907034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:49.286 { 00:07:49.286 "results": [ 00:07:49.286 { 00:07:49.286 "job": "raid_bdev1", 00:07:49.286 "core_mask": "0x1", 00:07:49.286 "workload": "randrw", 00:07:49.286 "percentage": 50, 00:07:49.286 "status": "finished", 00:07:49.286 "queue_depth": 1, 00:07:49.286 "io_size": 131072, 00:07:49.286 "runtime": 1.341732, 00:07:49.286 "iops": 12123.136364042894, 00:07:49.286 "mibps": 1515.3920455053617, 00:07:49.286 "io_failed": 1, 00:07:49.286 "io_timeout": 0, 00:07:49.286 "avg_latency_us": 114.70112154083749, 00:07:49.286 "min_latency_us": 34.20786026200874, 00:07:49.286 "max_latency_us": 1917.4288209606987 00:07:49.286 } 00:07:49.286 ], 00:07:49.286 "core_count": 1 00:07:49.286 } 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62590 00:07:49.286 09:07:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62590 ']' 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62590 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62590 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62590' 00:07:49.286 killing process with pid 62590 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62590 00:07:49.286 09:07:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62590 00:07:49.286 [2024-10-15 09:07:06.942177] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.286 [2024-10-15 09:07:07.111977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VzQUheFnxF 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.186 09:07:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:51.186 00:07:51.186 real 0m5.016s 00:07:51.186 user 0m6.237s 00:07:51.186 sys 0m0.558s 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.186 09:07:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.186 ************************************ 00:07:51.186 END TEST raid_write_error_test 00:07:51.186 ************************************ 00:07:51.186 09:07:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:51.186 09:07:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:51.186 09:07:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:51.186 09:07:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.186 09:07:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.186 ************************************ 00:07:51.186 START TEST raid_state_function_test 00:07:51.186 ************************************ 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.186 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:51.187 Process raid pid: 62739 00:07:51.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62739 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62739' 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62739 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62739 ']' 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.187 09:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.187 [2024-10-15 09:07:08.723660] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:07:51.187 [2024-10-15 09:07:08.723834] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.187 [2024-10-15 09:07:08.885077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.187 [2024-10-15 09:07:09.038703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.451 [2024-10-15 09:07:09.303955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.451 [2024-10-15 09:07:09.304025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.018 [2024-10-15 09:07:09.811511] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.018 [2024-10-15 09:07:09.811624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.018 [2024-10-15 09:07:09.811645] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.018 [2024-10-15 09:07:09.811665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.018 09:07:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.018 "name": "Existed_Raid", 00:07:52.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.018 "strip_size_kb": 0, 00:07:52.018 "state": "configuring", 00:07:52.018 
"raid_level": "raid1", 00:07:52.018 "superblock": false, 00:07:52.018 "num_base_bdevs": 2, 00:07:52.018 "num_base_bdevs_discovered": 0, 00:07:52.018 "num_base_bdevs_operational": 2, 00:07:52.018 "base_bdevs_list": [ 00:07:52.018 { 00:07:52.018 "name": "BaseBdev1", 00:07:52.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.018 "is_configured": false, 00:07:52.018 "data_offset": 0, 00:07:52.018 "data_size": 0 00:07:52.018 }, 00:07:52.018 { 00:07:52.018 "name": "BaseBdev2", 00:07:52.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.018 "is_configured": false, 00:07:52.018 "data_offset": 0, 00:07:52.018 "data_size": 0 00:07:52.018 } 00:07:52.018 ] 00:07:52.018 }' 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.018 09:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.586 [2024-10-15 09:07:10.342541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.586 [2024-10-15 09:07:10.342759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:52.586 [2024-10-15 09:07:10.350569] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.586 [2024-10-15 09:07:10.350642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.586 [2024-10-15 09:07:10.350655] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.586 [2024-10-15 09:07:10.350669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.586 [2024-10-15 09:07:10.400521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.586 BaseBdev1 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.586 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.586 [ 00:07:52.586 { 00:07:52.586 "name": "BaseBdev1", 00:07:52.586 "aliases": [ 00:07:52.586 "a5812e01-1568-4290-8d46-f120325846a3" 00:07:52.586 ], 00:07:52.586 "product_name": "Malloc disk", 00:07:52.586 "block_size": 512, 00:07:52.586 "num_blocks": 65536, 00:07:52.586 "uuid": "a5812e01-1568-4290-8d46-f120325846a3", 00:07:52.586 "assigned_rate_limits": { 00:07:52.586 "rw_ios_per_sec": 0, 00:07:52.586 "rw_mbytes_per_sec": 0, 00:07:52.586 "r_mbytes_per_sec": 0, 00:07:52.586 "w_mbytes_per_sec": 0 00:07:52.586 }, 00:07:52.586 "claimed": true, 00:07:52.587 "claim_type": "exclusive_write", 00:07:52.587 "zoned": false, 00:07:52.587 "supported_io_types": { 00:07:52.587 "read": true, 00:07:52.587 "write": true, 00:07:52.587 "unmap": true, 00:07:52.587 "flush": true, 00:07:52.587 "reset": true, 00:07:52.587 "nvme_admin": false, 00:07:52.587 "nvme_io": false, 00:07:52.587 "nvme_io_md": false, 00:07:52.587 "write_zeroes": true, 00:07:52.587 "zcopy": true, 00:07:52.587 "get_zone_info": false, 00:07:52.587 "zone_management": false, 00:07:52.587 "zone_append": false, 00:07:52.587 "compare": false, 00:07:52.587 "compare_and_write": false, 00:07:52.587 "abort": true, 00:07:52.587 "seek_hole": false, 00:07:52.587 "seek_data": false, 00:07:52.587 "copy": true, 00:07:52.587 "nvme_iov_md": 
false 00:07:52.587 }, 00:07:52.587 "memory_domains": [ 00:07:52.587 { 00:07:52.587 "dma_device_id": "system", 00:07:52.587 "dma_device_type": 1 00:07:52.587 }, 00:07:52.587 { 00:07:52.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.587 "dma_device_type": 2 00:07:52.587 } 00:07:52.587 ], 00:07:52.587 "driver_specific": {} 00:07:52.587 } 00:07:52.587 ] 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.587 09:07:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.587 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.846 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.846 "name": "Existed_Raid", 00:07:52.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.846 "strip_size_kb": 0, 00:07:52.846 "state": "configuring", 00:07:52.846 "raid_level": "raid1", 00:07:52.846 "superblock": false, 00:07:52.846 "num_base_bdevs": 2, 00:07:52.846 "num_base_bdevs_discovered": 1, 00:07:52.846 "num_base_bdevs_operational": 2, 00:07:52.846 "base_bdevs_list": [ 00:07:52.846 { 00:07:52.846 "name": "BaseBdev1", 00:07:52.846 "uuid": "a5812e01-1568-4290-8d46-f120325846a3", 00:07:52.846 "is_configured": true, 00:07:52.846 "data_offset": 0, 00:07:52.846 "data_size": 65536 00:07:52.846 }, 00:07:52.846 { 00:07:52.846 "name": "BaseBdev2", 00:07:52.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.846 "is_configured": false, 00:07:52.846 "data_offset": 0, 00:07:52.846 "data_size": 0 00:07:52.846 } 00:07:52.846 ] 00:07:52.846 }' 00:07:52.846 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.846 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.105 [2024-10-15 09:07:10.875868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:53.105 [2024-10-15 09:07:10.876026] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.105 [2024-10-15 09:07:10.887978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.105 [2024-10-15 09:07:10.890309] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.105 [2024-10-15 09:07:10.890450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.105 "name": "Existed_Raid", 00:07:53.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.105 "strip_size_kb": 0, 00:07:53.105 "state": "configuring", 00:07:53.105 "raid_level": "raid1", 00:07:53.105 "superblock": false, 00:07:53.105 "num_base_bdevs": 2, 00:07:53.105 "num_base_bdevs_discovered": 1, 00:07:53.105 "num_base_bdevs_operational": 2, 00:07:53.105 "base_bdevs_list": [ 00:07:53.105 { 00:07:53.105 "name": "BaseBdev1", 00:07:53.105 "uuid": "a5812e01-1568-4290-8d46-f120325846a3", 00:07:53.105 "is_configured": true, 00:07:53.105 "data_offset": 0, 00:07:53.105 "data_size": 65536 00:07:53.105 }, 00:07:53.105 { 00:07:53.105 "name": "BaseBdev2", 00:07:53.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.105 "is_configured": false, 00:07:53.105 "data_offset": 0, 00:07:53.105 "data_size": 0 00:07:53.105 } 00:07:53.105 
] 00:07:53.105 }' 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.105 09:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.673 [2024-10-15 09:07:11.388858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.673 [2024-10-15 09:07:11.388927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:53.673 [2024-10-15 09:07:11.388937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:53.673 [2024-10-15 09:07:11.389260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:53.673 [2024-10-15 09:07:11.389465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:53.673 [2024-10-15 09:07:11.389483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:53.673 [2024-10-15 09:07:11.389834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.673 BaseBdev2 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:53.673 09:07:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.673 [ 00:07:53.673 { 00:07:53.673 "name": "BaseBdev2", 00:07:53.673 "aliases": [ 00:07:53.673 "07c783ec-44f6-41a5-9ebb-ad1b0f64db13" 00:07:53.673 ], 00:07:53.673 "product_name": "Malloc disk", 00:07:53.673 "block_size": 512, 00:07:53.673 "num_blocks": 65536, 00:07:53.673 "uuid": "07c783ec-44f6-41a5-9ebb-ad1b0f64db13", 00:07:53.673 "assigned_rate_limits": { 00:07:53.673 "rw_ios_per_sec": 0, 00:07:53.673 "rw_mbytes_per_sec": 0, 00:07:53.673 "r_mbytes_per_sec": 0, 00:07:53.673 "w_mbytes_per_sec": 0 00:07:53.673 }, 00:07:53.673 "claimed": true, 00:07:53.673 "claim_type": "exclusive_write", 00:07:53.673 "zoned": false, 00:07:53.673 "supported_io_types": { 00:07:53.673 "read": true, 00:07:53.673 "write": true, 00:07:53.673 "unmap": true, 00:07:53.673 "flush": true, 00:07:53.673 "reset": true, 00:07:53.673 "nvme_admin": false, 00:07:53.673 "nvme_io": false, 00:07:53.673 "nvme_io_md": 
false, 00:07:53.673 "write_zeroes": true, 00:07:53.673 "zcopy": true, 00:07:53.673 "get_zone_info": false, 00:07:53.673 "zone_management": false, 00:07:53.673 "zone_append": false, 00:07:53.673 "compare": false, 00:07:53.673 "compare_and_write": false, 00:07:53.673 "abort": true, 00:07:53.673 "seek_hole": false, 00:07:53.673 "seek_data": false, 00:07:53.673 "copy": true, 00:07:53.673 "nvme_iov_md": false 00:07:53.673 }, 00:07:53.673 "memory_domains": [ 00:07:53.673 { 00:07:53.673 "dma_device_id": "system", 00:07:53.673 "dma_device_type": 1 00:07:53.673 }, 00:07:53.673 { 00:07:53.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.673 "dma_device_type": 2 00:07:53.673 } 00:07:53.673 ], 00:07:53.673 "driver_specific": {} 00:07:53.673 } 00:07:53.673 ] 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.673 "name": "Existed_Raid", 00:07:53.673 "uuid": "c855cb05-2614-40b7-8133-75df9d4966ac", 00:07:53.673 "strip_size_kb": 0, 00:07:53.673 "state": "online", 00:07:53.673 "raid_level": "raid1", 00:07:53.673 "superblock": false, 00:07:53.673 "num_base_bdevs": 2, 00:07:53.673 "num_base_bdevs_discovered": 2, 00:07:53.673 "num_base_bdevs_operational": 2, 00:07:53.673 "base_bdevs_list": [ 00:07:53.673 { 00:07:53.673 "name": "BaseBdev1", 00:07:53.673 "uuid": "a5812e01-1568-4290-8d46-f120325846a3", 00:07:53.673 "is_configured": true, 00:07:53.673 "data_offset": 0, 00:07:53.673 "data_size": 65536 00:07:53.673 }, 00:07:53.673 { 00:07:53.673 "name": "BaseBdev2", 00:07:53.673 "uuid": "07c783ec-44f6-41a5-9ebb-ad1b0f64db13", 00:07:53.673 "is_configured": true, 00:07:53.673 "data_offset": 0, 00:07:53.673 "data_size": 65536 00:07:53.673 } 00:07:53.673 ] 00:07:53.673 }' 00:07:53.673 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:53.674 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.327 [2024-10-15 09:07:11.896598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.327 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.327 "name": "Existed_Raid", 00:07:54.327 "aliases": [ 00:07:54.327 "c855cb05-2614-40b7-8133-75df9d4966ac" 00:07:54.327 ], 00:07:54.327 "product_name": "Raid Volume", 00:07:54.327 "block_size": 512, 00:07:54.327 "num_blocks": 65536, 00:07:54.327 "uuid": "c855cb05-2614-40b7-8133-75df9d4966ac", 00:07:54.327 "assigned_rate_limits": { 00:07:54.327 "rw_ios_per_sec": 0, 00:07:54.327 "rw_mbytes_per_sec": 0, 00:07:54.327 "r_mbytes_per_sec": 
0, 00:07:54.327 "w_mbytes_per_sec": 0 00:07:54.327 }, 00:07:54.327 "claimed": false, 00:07:54.327 "zoned": false, 00:07:54.327 "supported_io_types": { 00:07:54.327 "read": true, 00:07:54.327 "write": true, 00:07:54.327 "unmap": false, 00:07:54.327 "flush": false, 00:07:54.327 "reset": true, 00:07:54.327 "nvme_admin": false, 00:07:54.327 "nvme_io": false, 00:07:54.327 "nvme_io_md": false, 00:07:54.327 "write_zeroes": true, 00:07:54.327 "zcopy": false, 00:07:54.328 "get_zone_info": false, 00:07:54.328 "zone_management": false, 00:07:54.328 "zone_append": false, 00:07:54.328 "compare": false, 00:07:54.328 "compare_and_write": false, 00:07:54.328 "abort": false, 00:07:54.328 "seek_hole": false, 00:07:54.328 "seek_data": false, 00:07:54.328 "copy": false, 00:07:54.328 "nvme_iov_md": false 00:07:54.328 }, 00:07:54.328 "memory_domains": [ 00:07:54.328 { 00:07:54.328 "dma_device_id": "system", 00:07:54.328 "dma_device_type": 1 00:07:54.328 }, 00:07:54.328 { 00:07:54.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.328 "dma_device_type": 2 00:07:54.328 }, 00:07:54.328 { 00:07:54.328 "dma_device_id": "system", 00:07:54.328 "dma_device_type": 1 00:07:54.328 }, 00:07:54.328 { 00:07:54.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.328 "dma_device_type": 2 00:07:54.328 } 00:07:54.328 ], 00:07:54.328 "driver_specific": { 00:07:54.328 "raid": { 00:07:54.328 "uuid": "c855cb05-2614-40b7-8133-75df9d4966ac", 00:07:54.328 "strip_size_kb": 0, 00:07:54.328 "state": "online", 00:07:54.328 "raid_level": "raid1", 00:07:54.328 "superblock": false, 00:07:54.328 "num_base_bdevs": 2, 00:07:54.328 "num_base_bdevs_discovered": 2, 00:07:54.328 "num_base_bdevs_operational": 2, 00:07:54.328 "base_bdevs_list": [ 00:07:54.328 { 00:07:54.328 "name": "BaseBdev1", 00:07:54.328 "uuid": "a5812e01-1568-4290-8d46-f120325846a3", 00:07:54.328 "is_configured": true, 00:07:54.328 "data_offset": 0, 00:07:54.328 "data_size": 65536 00:07:54.328 }, 00:07:54.328 { 00:07:54.328 "name": "BaseBdev2", 
00:07:54.328 "uuid": "07c783ec-44f6-41a5-9ebb-ad1b0f64db13", 00:07:54.328 "is_configured": true, 00:07:54.328 "data_offset": 0, 00:07:54.328 "data_size": 65536 00:07:54.328 } 00:07:54.328 ] 00:07:54.328 } 00:07:54.328 } 00:07:54.328 }' 00:07:54.328 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.328 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:54.328 BaseBdev2' 00:07:54.328 09:07:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.328 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.328 [2024-10-15 09:07:12.151958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.587 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.587 "name": "Existed_Raid", 00:07:54.587 "uuid": "c855cb05-2614-40b7-8133-75df9d4966ac", 00:07:54.587 "strip_size_kb": 0, 00:07:54.587 "state": "online", 00:07:54.587 "raid_level": "raid1", 00:07:54.587 "superblock": false, 00:07:54.587 "num_base_bdevs": 2, 00:07:54.587 "num_base_bdevs_discovered": 1, 00:07:54.587 "num_base_bdevs_operational": 1, 00:07:54.587 "base_bdevs_list": [ 00:07:54.587 
{ 00:07:54.587 "name": null, 00:07:54.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.587 "is_configured": false, 00:07:54.587 "data_offset": 0, 00:07:54.588 "data_size": 65536 00:07:54.588 }, 00:07:54.588 { 00:07:54.588 "name": "BaseBdev2", 00:07:54.588 "uuid": "07c783ec-44f6-41a5-9ebb-ad1b0f64db13", 00:07:54.588 "is_configured": true, 00:07:54.588 "data_offset": 0, 00:07:54.588 "data_size": 65536 00:07:54.588 } 00:07:54.588 ] 00:07:54.588 }' 00:07:54.588 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.588 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:55.154 [2024-10-15 09:07:12.820389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:55.154 [2024-10-15 09:07:12.820641] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.154 [2024-10-15 09:07:12.939561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.154 [2024-10-15 09:07:12.939776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.154 [2024-10-15 09:07:12.939843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62739 00:07:55.154 09:07:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62739 ']' 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62739 00:07:55.154 09:07:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:55.154 09:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.154 09:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62739 00:07:55.154 killing process with pid 62739 00:07:55.154 09:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.155 09:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.155 09:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62739' 00:07:55.155 09:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62739 00:07:55.155 09:07:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62739 00:07:55.155 [2024-10-15 09:07:13.028083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.155 [2024-10-15 09:07:13.048884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.530 09:07:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:56.530 ************************************ 00:07:56.530 END TEST raid_state_function_test 00:07:56.530 ************************************ 00:07:56.530 00:07:56.530 real 0m5.805s 00:07:56.530 user 0m8.446s 00:07:56.530 sys 0m0.770s 00:07:56.530 09:07:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.530 09:07:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.789 09:07:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:56.789 09:07:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:56.789 09:07:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.789 09:07:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.789 ************************************ 00:07:56.789 START TEST raid_state_function_test_sb 00:07:56.789 ************************************ 00:07:56.789 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:56.789 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:56.789 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:56.789 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:56.789 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.789 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:56.789 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.789 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:56.790 Process raid pid: 62998 00:07:56.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62998 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62998' 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62998 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62998 ']' 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.790 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.790 [2024-10-15 09:07:14.591456] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:07:56.790 [2024-10-15 09:07:14.591805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.048 [2024-10-15 09:07:14.767550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.048 [2024-10-15 09:07:14.944370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.313 [2024-10-15 09:07:15.201767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.313 [2024-10-15 09:07:15.201933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.881 [2024-10-15 09:07:15.583018] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.881 [2024-10-15 09:07:15.583196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.881 [2024-10-15 09:07:15.583239] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.881 [2024-10-15 09:07:15.583278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.881 
09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.881 "name": "Existed_Raid", 00:07:57.881 "uuid": "5dabc3bc-ffb0-4759-87fc-861416c77058", 00:07:57.881 "strip_size_kb": 0, 
00:07:57.881 "state": "configuring", 00:07:57.881 "raid_level": "raid1", 00:07:57.881 "superblock": true, 00:07:57.881 "num_base_bdevs": 2, 00:07:57.881 "num_base_bdevs_discovered": 0, 00:07:57.881 "num_base_bdevs_operational": 2, 00:07:57.881 "base_bdevs_list": [ 00:07:57.881 { 00:07:57.881 "name": "BaseBdev1", 00:07:57.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.881 "is_configured": false, 00:07:57.881 "data_offset": 0, 00:07:57.881 "data_size": 0 00:07:57.881 }, 00:07:57.881 { 00:07:57.881 "name": "BaseBdev2", 00:07:57.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.881 "is_configured": false, 00:07:57.881 "data_offset": 0, 00:07:57.881 "data_size": 0 00:07:57.881 } 00:07:57.881 ] 00:07:57.881 }' 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.881 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.139 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.139 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.139 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.139 [2024-10-15 09:07:16.030172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.139 [2024-10-15 09:07:16.030218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:58.139 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.139 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.139 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.139 09:07:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.398 [2024-10-15 09:07:16.038217] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.398 [2024-10-15 09:07:16.038284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.398 [2024-10-15 09:07:16.038297] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.398 [2024-10-15 09:07:16.038311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.398 [2024-10-15 09:07:16.088830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.398 BaseBdev1 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.398 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.399 [ 00:07:58.399 { 00:07:58.399 "name": "BaseBdev1", 00:07:58.399 "aliases": [ 00:07:58.399 "d0d5874a-73e9-4e3c-ba00-cf821201001a" 00:07:58.399 ], 00:07:58.399 "product_name": "Malloc disk", 00:07:58.399 "block_size": 512, 00:07:58.399 "num_blocks": 65536, 00:07:58.399 "uuid": "d0d5874a-73e9-4e3c-ba00-cf821201001a", 00:07:58.399 "assigned_rate_limits": { 00:07:58.399 "rw_ios_per_sec": 0, 00:07:58.399 "rw_mbytes_per_sec": 0, 00:07:58.399 "r_mbytes_per_sec": 0, 00:07:58.399 "w_mbytes_per_sec": 0 00:07:58.399 }, 00:07:58.399 "claimed": true, 00:07:58.399 "claim_type": "exclusive_write", 00:07:58.399 "zoned": false, 00:07:58.399 "supported_io_types": { 00:07:58.399 "read": true, 00:07:58.399 "write": true, 00:07:58.399 "unmap": true, 00:07:58.399 "flush": true, 00:07:58.399 "reset": true, 00:07:58.399 "nvme_admin": false, 00:07:58.399 "nvme_io": false, 00:07:58.399 "nvme_io_md": false, 00:07:58.399 "write_zeroes": true, 00:07:58.399 "zcopy": true, 00:07:58.399 "get_zone_info": false, 00:07:58.399 "zone_management": false, 00:07:58.399 "zone_append": false, 00:07:58.399 "compare": false, 00:07:58.399 "compare_and_write": false, 00:07:58.399 
"abort": true, 00:07:58.399 "seek_hole": false, 00:07:58.399 "seek_data": false, 00:07:58.399 "copy": true, 00:07:58.399 "nvme_iov_md": false 00:07:58.399 }, 00:07:58.399 "memory_domains": [ 00:07:58.399 { 00:07:58.399 "dma_device_id": "system", 00:07:58.399 "dma_device_type": 1 00:07:58.399 }, 00:07:58.399 { 00:07:58.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.399 "dma_device_type": 2 00:07:58.399 } 00:07:58.399 ], 00:07:58.399 "driver_specific": {} 00:07:58.399 } 00:07:58.399 ] 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.399 "name": "Existed_Raid", 00:07:58.399 "uuid": "a7870b29-b5c9-4d45-b671-867c1f2bb311", 00:07:58.399 "strip_size_kb": 0, 00:07:58.399 "state": "configuring", 00:07:58.399 "raid_level": "raid1", 00:07:58.399 "superblock": true, 00:07:58.399 "num_base_bdevs": 2, 00:07:58.399 "num_base_bdevs_discovered": 1, 00:07:58.399 "num_base_bdevs_operational": 2, 00:07:58.399 "base_bdevs_list": [ 00:07:58.399 { 00:07:58.399 "name": "BaseBdev1", 00:07:58.399 "uuid": "d0d5874a-73e9-4e3c-ba00-cf821201001a", 00:07:58.399 "is_configured": true, 00:07:58.399 "data_offset": 2048, 00:07:58.399 "data_size": 63488 00:07:58.399 }, 00:07:58.399 { 00:07:58.399 "name": "BaseBdev2", 00:07:58.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.399 "is_configured": false, 00:07:58.399 "data_offset": 0, 00:07:58.399 "data_size": 0 00:07:58.399 } 00:07:58.399 ] 00:07:58.399 }' 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.399 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.966 [2024-10-15 09:07:16.600138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.966 [2024-10-15 09:07:16.600299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.966 [2024-10-15 09:07:16.608191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.966 [2024-10-15 09:07:16.610464] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.966 [2024-10-15 09:07:16.610571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.966 "name": "Existed_Raid", 00:07:58.966 "uuid": "57324ab5-5e26-440d-b407-9b577b77a9cd", 00:07:58.966 "strip_size_kb": 0, 00:07:58.966 "state": "configuring", 00:07:58.966 "raid_level": "raid1", 00:07:58.966 "superblock": true, 00:07:58.966 "num_base_bdevs": 2, 00:07:58.966 "num_base_bdevs_discovered": 1, 00:07:58.966 "num_base_bdevs_operational": 2, 00:07:58.966 "base_bdevs_list": [ 00:07:58.966 { 00:07:58.966 "name": "BaseBdev1", 00:07:58.966 "uuid": "d0d5874a-73e9-4e3c-ba00-cf821201001a", 00:07:58.966 "is_configured": true, 00:07:58.966 "data_offset": 2048, 
00:07:58.966 "data_size": 63488 00:07:58.966 }, 00:07:58.966 { 00:07:58.966 "name": "BaseBdev2", 00:07:58.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.966 "is_configured": false, 00:07:58.966 "data_offset": 0, 00:07:58.966 "data_size": 0 00:07:58.966 } 00:07:58.966 ] 00:07:58.966 }' 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.966 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.225 [2024-10-15 09:07:17.076407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.225 [2024-10-15 09:07:17.076766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:59.225 [2024-10-15 09:07:17.076786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.225 [2024-10-15 09:07:17.077110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.225 [2024-10-15 09:07:17.077328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:59.225 [2024-10-15 09:07:17.077349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:59.225 BaseBdev2 00:07:59.225 [2024-10-15 09:07:17.077549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.225 [ 00:07:59.225 { 00:07:59.225 "name": "BaseBdev2", 00:07:59.225 "aliases": [ 00:07:59.225 "f1696fdb-7af8-4eaf-9f13-27db34d12617" 00:07:59.225 ], 00:07:59.225 "product_name": "Malloc disk", 00:07:59.225 "block_size": 512, 00:07:59.225 "num_blocks": 65536, 00:07:59.225 "uuid": "f1696fdb-7af8-4eaf-9f13-27db34d12617", 00:07:59.225 "assigned_rate_limits": { 00:07:59.225 "rw_ios_per_sec": 0, 00:07:59.225 "rw_mbytes_per_sec": 0, 00:07:59.225 "r_mbytes_per_sec": 0, 00:07:59.225 "w_mbytes_per_sec": 0 00:07:59.225 }, 00:07:59.225 "claimed": true, 00:07:59.225 "claim_type": 
"exclusive_write", 00:07:59.225 "zoned": false, 00:07:59.225 "supported_io_types": { 00:07:59.225 "read": true, 00:07:59.225 "write": true, 00:07:59.225 "unmap": true, 00:07:59.225 "flush": true, 00:07:59.225 "reset": true, 00:07:59.225 "nvme_admin": false, 00:07:59.225 "nvme_io": false, 00:07:59.225 "nvme_io_md": false, 00:07:59.225 "write_zeroes": true, 00:07:59.225 "zcopy": true, 00:07:59.225 "get_zone_info": false, 00:07:59.225 "zone_management": false, 00:07:59.225 "zone_append": false, 00:07:59.225 "compare": false, 00:07:59.225 "compare_and_write": false, 00:07:59.225 "abort": true, 00:07:59.225 "seek_hole": false, 00:07:59.225 "seek_data": false, 00:07:59.225 "copy": true, 00:07:59.225 "nvme_iov_md": false 00:07:59.225 }, 00:07:59.225 "memory_domains": [ 00:07:59.225 { 00:07:59.225 "dma_device_id": "system", 00:07:59.225 "dma_device_type": 1 00:07:59.225 }, 00:07:59.225 { 00:07:59.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.225 "dma_device_type": 2 00:07:59.225 } 00:07:59.225 ], 00:07:59.225 "driver_specific": {} 00:07:59.225 } 00:07:59.225 ] 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.225 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.484 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.484 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.484 "name": "Existed_Raid", 00:07:59.484 "uuid": "57324ab5-5e26-440d-b407-9b577b77a9cd", 00:07:59.484 "strip_size_kb": 0, 00:07:59.484 "state": "online", 00:07:59.484 "raid_level": "raid1", 00:07:59.484 "superblock": true, 00:07:59.484 "num_base_bdevs": 2, 00:07:59.484 "num_base_bdevs_discovered": 2, 00:07:59.484 "num_base_bdevs_operational": 2, 00:07:59.484 "base_bdevs_list": [ 00:07:59.484 { 00:07:59.484 "name": "BaseBdev1", 00:07:59.484 "uuid": "d0d5874a-73e9-4e3c-ba00-cf821201001a", 00:07:59.484 "is_configured": true, 00:07:59.484 "data_offset": 2048, 00:07:59.484 "data_size": 63488 
00:07:59.484 }, 00:07:59.484 { 00:07:59.484 "name": "BaseBdev2", 00:07:59.484 "uuid": "f1696fdb-7af8-4eaf-9f13-27db34d12617", 00:07:59.484 "is_configured": true, 00:07:59.484 "data_offset": 2048, 00:07:59.484 "data_size": 63488 00:07:59.484 } 00:07:59.484 ] 00:07:59.484 }' 00:07:59.484 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.484 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 [2024-10-15 09:07:17.584295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.742 "name": 
"Existed_Raid", 00:07:59.742 "aliases": [ 00:07:59.742 "57324ab5-5e26-440d-b407-9b577b77a9cd" 00:07:59.742 ], 00:07:59.742 "product_name": "Raid Volume", 00:07:59.742 "block_size": 512, 00:07:59.742 "num_blocks": 63488, 00:07:59.742 "uuid": "57324ab5-5e26-440d-b407-9b577b77a9cd", 00:07:59.742 "assigned_rate_limits": { 00:07:59.742 "rw_ios_per_sec": 0, 00:07:59.742 "rw_mbytes_per_sec": 0, 00:07:59.742 "r_mbytes_per_sec": 0, 00:07:59.742 "w_mbytes_per_sec": 0 00:07:59.742 }, 00:07:59.742 "claimed": false, 00:07:59.742 "zoned": false, 00:07:59.742 "supported_io_types": { 00:07:59.742 "read": true, 00:07:59.742 "write": true, 00:07:59.742 "unmap": false, 00:07:59.742 "flush": false, 00:07:59.742 "reset": true, 00:07:59.742 "nvme_admin": false, 00:07:59.742 "nvme_io": false, 00:07:59.742 "nvme_io_md": false, 00:07:59.742 "write_zeroes": true, 00:07:59.742 "zcopy": false, 00:07:59.742 "get_zone_info": false, 00:07:59.742 "zone_management": false, 00:07:59.742 "zone_append": false, 00:07:59.742 "compare": false, 00:07:59.742 "compare_and_write": false, 00:07:59.742 "abort": false, 00:07:59.742 "seek_hole": false, 00:07:59.742 "seek_data": false, 00:07:59.742 "copy": false, 00:07:59.742 "nvme_iov_md": false 00:07:59.742 }, 00:07:59.742 "memory_domains": [ 00:07:59.742 { 00:07:59.742 "dma_device_id": "system", 00:07:59.742 "dma_device_type": 1 00:07:59.742 }, 00:07:59.742 { 00:07:59.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.742 "dma_device_type": 2 00:07:59.742 }, 00:07:59.742 { 00:07:59.742 "dma_device_id": "system", 00:07:59.742 "dma_device_type": 1 00:07:59.742 }, 00:07:59.742 { 00:07:59.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.742 "dma_device_type": 2 00:07:59.742 } 00:07:59.742 ], 00:07:59.742 "driver_specific": { 00:07:59.742 "raid": { 00:07:59.742 "uuid": "57324ab5-5e26-440d-b407-9b577b77a9cd", 00:07:59.742 "strip_size_kb": 0, 00:07:59.742 "state": "online", 00:07:59.742 "raid_level": "raid1", 00:07:59.742 "superblock": true, 00:07:59.742 
"num_base_bdevs": 2, 00:07:59.742 "num_base_bdevs_discovered": 2, 00:07:59.742 "num_base_bdevs_operational": 2, 00:07:59.742 "base_bdevs_list": [ 00:07:59.742 { 00:07:59.742 "name": "BaseBdev1", 00:07:59.742 "uuid": "d0d5874a-73e9-4e3c-ba00-cf821201001a", 00:07:59.742 "is_configured": true, 00:07:59.742 "data_offset": 2048, 00:07:59.742 "data_size": 63488 00:07:59.742 }, 00:07:59.742 { 00:07:59.742 "name": "BaseBdev2", 00:07:59.742 "uuid": "f1696fdb-7af8-4eaf-9f13-27db34d12617", 00:07:59.742 "is_configured": true, 00:07:59.742 "data_offset": 2048, 00:07:59.742 "data_size": 63488 00:07:59.742 } 00:07:59.742 ] 00:07:59.742 } 00:07:59.742 } 00:07:59.742 }' 00:07:59.742 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:00.037 BaseBdev2' 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.037 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.037 [2024-10-15 09:07:17.807523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:00.299 09:07:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.299 09:07:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.299 "name": "Existed_Raid", 00:08:00.299 "uuid": "57324ab5-5e26-440d-b407-9b577b77a9cd", 00:08:00.299 "strip_size_kb": 0, 00:08:00.299 "state": "online", 00:08:00.299 "raid_level": "raid1", 00:08:00.299 "superblock": true, 00:08:00.299 "num_base_bdevs": 2, 00:08:00.299 "num_base_bdevs_discovered": 1, 00:08:00.299 "num_base_bdevs_operational": 1, 00:08:00.299 "base_bdevs_list": [ 00:08:00.299 { 00:08:00.299 "name": null, 00:08:00.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.299 "is_configured": false, 00:08:00.299 "data_offset": 0, 00:08:00.299 "data_size": 63488 00:08:00.299 }, 00:08:00.299 { 00:08:00.299 "name": "BaseBdev2", 00:08:00.299 "uuid": "f1696fdb-7af8-4eaf-9f13-27db34d12617", 00:08:00.299 "is_configured": true, 00:08:00.299 "data_offset": 2048, 00:08:00.299 "data_size": 63488 00:08:00.299 } 00:08:00.299 ] 00:08:00.299 }' 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.299 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.558 09:07:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.558 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.558 [2024-10-15 09:07:18.410198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.558 [2024-10-15 09:07:18.410330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.817 [2024-10-15 09:07:18.527329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.817 [2024-10-15 09:07:18.527405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.817 [2024-10-15 09:07:18.527420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62998 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62998 ']' 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62998 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62998 00:08:00.817 killing process with pid 62998 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62998' 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62998 00:08:00.817 [2024-10-15 09:07:18.616333] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.817 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62998 
00:08:00.817 [2024-10-15 09:07:18.637094] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.199 09:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:02.199 00:08:02.199 real 0m5.486s 00:08:02.199 user 0m7.866s 00:08:02.199 sys 0m0.821s 00:08:02.199 09:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.199 09:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.199 ************************************ 00:08:02.199 END TEST raid_state_function_test_sb 00:08:02.199 ************************************ 00:08:02.199 09:07:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:02.199 09:07:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:02.199 09:07:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.199 09:07:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.199 ************************************ 00:08:02.199 START TEST raid_superblock_test 00:08:02.199 ************************************ 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 
-- # base_bdevs_pt_uuid=() 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63254 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63254 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63254 ']' 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.199 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.458 [2024-10-15 09:07:20.123670] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:08:02.458 [2024-10-15 09:07:20.123839] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63254 ] 00:08:02.458 [2024-10-15 09:07:20.293216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.716 [2024-10-15 09:07:20.431190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.036 [2024-10-15 09:07:20.673062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.036 [2024-10-15 09:07:20.673121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:03.294 09:07:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.294 malloc1 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.294 [2024-10-15 09:07:21.134340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:03.294 [2024-10-15 09:07:21.134462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.294 [2024-10-15 09:07:21.134504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:03.294 [2024-10-15 09:07:21.134521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.294 [2024-10-15 09:07:21.137290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.294 [2024-10-15 09:07:21.137436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:03.294 pt1 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:03.294 09:07:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.294 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.555 malloc2 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.555 [2024-10-15 09:07:21.194498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.555 [2024-10-15 09:07:21.194619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.555 [2024-10-15 09:07:21.194663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:03.555 
[2024-10-15 09:07:21.194680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.555 [2024-10-15 09:07:21.197642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.555 [2024-10-15 09:07:21.197736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:03.555 pt2 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.555 [2024-10-15 09:07:21.206833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:03.555 [2024-10-15 09:07:21.209846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.555 [2024-10-15 09:07:21.210176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:03.555 [2024-10-15 09:07:21.210208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:03.555 [2024-10-15 09:07:21.210660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:03.555 [2024-10-15 09:07:21.210946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:03.555 [2024-10-15 09:07:21.211061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:03.555 [2024-10-15 09:07:21.211418] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.555 "name": "raid_bdev1", 00:08:03.555 "uuid": 
"988539f1-b478-45d5-805f-9246c0c641b0", 00:08:03.555 "strip_size_kb": 0, 00:08:03.555 "state": "online", 00:08:03.555 "raid_level": "raid1", 00:08:03.555 "superblock": true, 00:08:03.555 "num_base_bdevs": 2, 00:08:03.555 "num_base_bdevs_discovered": 2, 00:08:03.555 "num_base_bdevs_operational": 2, 00:08:03.555 "base_bdevs_list": [ 00:08:03.555 { 00:08:03.555 "name": "pt1", 00:08:03.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.555 "is_configured": true, 00:08:03.555 "data_offset": 2048, 00:08:03.555 "data_size": 63488 00:08:03.555 }, 00:08:03.555 { 00:08:03.555 "name": "pt2", 00:08:03.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.555 "is_configured": true, 00:08:03.555 "data_offset": 2048, 00:08:03.555 "data_size": 63488 00:08:03.555 } 00:08:03.555 ] 00:08:03.555 }' 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.555 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.814 09:07:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.814 [2024-10-15 09:07:21.663225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.814 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.814 "name": "raid_bdev1", 00:08:03.814 "aliases": [ 00:08:03.814 "988539f1-b478-45d5-805f-9246c0c641b0" 00:08:03.814 ], 00:08:03.814 "product_name": "Raid Volume", 00:08:03.814 "block_size": 512, 00:08:03.814 "num_blocks": 63488, 00:08:03.814 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:03.814 "assigned_rate_limits": { 00:08:03.814 "rw_ios_per_sec": 0, 00:08:03.814 "rw_mbytes_per_sec": 0, 00:08:03.814 "r_mbytes_per_sec": 0, 00:08:03.814 "w_mbytes_per_sec": 0 00:08:03.814 }, 00:08:03.814 "claimed": false, 00:08:03.814 "zoned": false, 00:08:03.814 "supported_io_types": { 00:08:03.814 "read": true, 00:08:03.814 "write": true, 00:08:03.814 "unmap": false, 00:08:03.814 "flush": false, 00:08:03.814 "reset": true, 00:08:03.814 "nvme_admin": false, 00:08:03.814 "nvme_io": false, 00:08:03.814 "nvme_io_md": false, 00:08:03.814 "write_zeroes": true, 00:08:03.814 "zcopy": false, 00:08:03.814 "get_zone_info": false, 00:08:03.814 "zone_management": false, 00:08:03.814 "zone_append": false, 00:08:03.814 "compare": false, 00:08:03.814 "compare_and_write": false, 00:08:03.814 "abort": false, 00:08:03.814 "seek_hole": false, 00:08:03.814 "seek_data": false, 00:08:03.814 "copy": false, 00:08:03.814 "nvme_iov_md": false 00:08:03.814 }, 00:08:03.814 "memory_domains": [ 00:08:03.814 { 00:08:03.814 "dma_device_id": "system", 00:08:03.814 "dma_device_type": 1 00:08:03.814 }, 00:08:03.814 { 00:08:03.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.814 "dma_device_type": 2 00:08:03.814 }, 00:08:03.814 { 00:08:03.814 "dma_device_id": "system", 00:08:03.814 "dma_device_type": 
1 00:08:03.814 }, 00:08:03.814 { 00:08:03.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.814 "dma_device_type": 2 00:08:03.814 } 00:08:03.814 ], 00:08:03.814 "driver_specific": { 00:08:03.814 "raid": { 00:08:03.814 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:03.814 "strip_size_kb": 0, 00:08:03.814 "state": "online", 00:08:03.814 "raid_level": "raid1", 00:08:03.814 "superblock": true, 00:08:03.814 "num_base_bdevs": 2, 00:08:03.814 "num_base_bdevs_discovered": 2, 00:08:03.814 "num_base_bdevs_operational": 2, 00:08:03.815 "base_bdevs_list": [ 00:08:03.815 { 00:08:03.815 "name": "pt1", 00:08:03.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.815 "is_configured": true, 00:08:03.815 "data_offset": 2048, 00:08:03.815 "data_size": 63488 00:08:03.815 }, 00:08:03.815 { 00:08:03.815 "name": "pt2", 00:08:03.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.815 "is_configured": true, 00:08:03.815 "data_offset": 2048, 00:08:03.815 "data_size": 63488 00:08:03.815 } 00:08:03.815 ] 00:08:03.815 } 00:08:03.815 } 00:08:03.815 }' 00:08:03.815 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:04.075 pt2' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.075 [2024-10-15 09:07:21.919235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.075 09:07:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=988539f1-b478-45d5-805f-9246c0c641b0 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 988539f1-b478-45d5-805f-9246c0c641b0 ']' 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.075 [2024-10-15 09:07:21.958889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.075 [2024-10-15 09:07:21.959002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.075 [2024-10-15 09:07:21.959147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.075 [2024-10-15 09:07:21.959247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.075 [2024-10-15 09:07:21.959321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.075 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:04.334 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.334 09:07:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.334 [2024-10-15 09:07:22.090794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:04.334 [2024-10-15 09:07:22.093062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:04.334 [2024-10-15 09:07:22.093246] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:04.334 [2024-10-15 09:07:22.093329] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:04.334 [2024-10-15 09:07:22.093349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.334 [2024-10-15 09:07:22.093363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 
name raid_bdev1, state configuring 00:08:04.334 request: 00:08:04.334 { 00:08:04.334 "name": "raid_bdev1", 00:08:04.334 "raid_level": "raid1", 00:08:04.334 "base_bdevs": [ 00:08:04.334 "malloc1", 00:08:04.334 "malloc2" 00:08:04.334 ], 00:08:04.334 "superblock": false, 00:08:04.334 "method": "bdev_raid_create", 00:08:04.334 "req_id": 1 00:08:04.334 } 00:08:04.334 Got JSON-RPC error response 00:08:04.334 response: 00:08:04.334 { 00:08:04.334 "code": -17, 00:08:04.334 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:04.334 } 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.334 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 [2024-10-15 09:07:22.146638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:04.335 [2024-10-15 09:07:22.146750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.335 [2024-10-15 09:07:22.146794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:04.335 [2024-10-15 09:07:22.146808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.335 [2024-10-15 09:07:22.149496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.335 [2024-10-15 09:07:22.149563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:04.335 [2024-10-15 09:07:22.149680] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:04.335 [2024-10-15 09:07:22.149781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:04.335 pt1 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.335 "name": "raid_bdev1", 00:08:04.335 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:04.335 "strip_size_kb": 0, 00:08:04.335 "state": "configuring", 00:08:04.335 "raid_level": "raid1", 00:08:04.335 "superblock": true, 00:08:04.335 "num_base_bdevs": 2, 00:08:04.335 "num_base_bdevs_discovered": 1, 00:08:04.335 "num_base_bdevs_operational": 2, 00:08:04.335 "base_bdevs_list": [ 00:08:04.335 { 00:08:04.335 "name": "pt1", 00:08:04.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.335 "is_configured": true, 00:08:04.335 "data_offset": 2048, 00:08:04.335 "data_size": 63488 00:08:04.335 }, 00:08:04.335 { 00:08:04.335 "name": null, 00:08:04.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.335 "is_configured": false, 00:08:04.335 "data_offset": 2048, 00:08:04.335 "data_size": 63488 00:08:04.335 } 00:08:04.335 ] 00:08:04.335 }' 00:08:04.335 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.335 09:07:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.903 [2024-10-15 09:07:22.637882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.903 [2024-10-15 09:07:22.637988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.903 [2024-10-15 09:07:22.638016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:04.903 [2024-10-15 09:07:22.638030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.903 [2024-10-15 09:07:22.638650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.903 [2024-10-15 09:07:22.638718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.903 [2024-10-15 09:07:22.638836] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:04.903 [2024-10-15 09:07:22.638870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.903 [2024-10-15 09:07:22.639035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.903 [2024-10-15 09:07:22.639058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.903 [2024-10-15 
09:07:22.639355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:04.903 [2024-10-15 09:07:22.639558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.903 [2024-10-15 09:07:22.639570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:04.903 [2024-10-15 09:07:22.639783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.903 pt2 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.903 "name": "raid_bdev1", 00:08:04.903 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:04.903 "strip_size_kb": 0, 00:08:04.903 "state": "online", 00:08:04.903 "raid_level": "raid1", 00:08:04.903 "superblock": true, 00:08:04.903 "num_base_bdevs": 2, 00:08:04.903 "num_base_bdevs_discovered": 2, 00:08:04.903 "num_base_bdevs_operational": 2, 00:08:04.903 "base_bdevs_list": [ 00:08:04.903 { 00:08:04.903 "name": "pt1", 00:08:04.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.903 "is_configured": true, 00:08:04.903 "data_offset": 2048, 00:08:04.903 "data_size": 63488 00:08:04.903 }, 00:08:04.903 { 00:08:04.903 "name": "pt2", 00:08:04.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.903 "is_configured": true, 00:08:04.903 "data_offset": 2048, 00:08:04.903 "data_size": 63488 00:08:04.903 } 00:08:04.903 ] 00:08:04.903 }' 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.903 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.473 [2024-10-15 09:07:23.081780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.473 "name": "raid_bdev1", 00:08:05.473 "aliases": [ 00:08:05.473 "988539f1-b478-45d5-805f-9246c0c641b0" 00:08:05.473 ], 00:08:05.473 "product_name": "Raid Volume", 00:08:05.473 "block_size": 512, 00:08:05.473 "num_blocks": 63488, 00:08:05.473 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:05.473 "assigned_rate_limits": { 00:08:05.473 "rw_ios_per_sec": 0, 00:08:05.473 "rw_mbytes_per_sec": 0, 00:08:05.473 "r_mbytes_per_sec": 0, 00:08:05.473 "w_mbytes_per_sec": 0 00:08:05.473 }, 00:08:05.473 "claimed": false, 00:08:05.473 "zoned": false, 00:08:05.473 "supported_io_types": { 00:08:05.473 "read": true, 00:08:05.473 "write": true, 00:08:05.473 "unmap": false, 00:08:05.473 "flush": false, 00:08:05.473 "reset": true, 00:08:05.473 "nvme_admin": false, 00:08:05.473 "nvme_io": false, 00:08:05.473 "nvme_io_md": false, 00:08:05.473 "write_zeroes": true, 00:08:05.473 "zcopy": false, 00:08:05.473 "get_zone_info": false, 
00:08:05.473 "zone_management": false, 00:08:05.473 "zone_append": false, 00:08:05.473 "compare": false, 00:08:05.473 "compare_and_write": false, 00:08:05.473 "abort": false, 00:08:05.473 "seek_hole": false, 00:08:05.473 "seek_data": false, 00:08:05.473 "copy": false, 00:08:05.473 "nvme_iov_md": false 00:08:05.473 }, 00:08:05.473 "memory_domains": [ 00:08:05.473 { 00:08:05.473 "dma_device_id": "system", 00:08:05.473 "dma_device_type": 1 00:08:05.473 }, 00:08:05.473 { 00:08:05.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.473 "dma_device_type": 2 00:08:05.473 }, 00:08:05.473 { 00:08:05.473 "dma_device_id": "system", 00:08:05.473 "dma_device_type": 1 00:08:05.473 }, 00:08:05.473 { 00:08:05.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.473 "dma_device_type": 2 00:08:05.473 } 00:08:05.473 ], 00:08:05.473 "driver_specific": { 00:08:05.473 "raid": { 00:08:05.473 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:05.473 "strip_size_kb": 0, 00:08:05.473 "state": "online", 00:08:05.473 "raid_level": "raid1", 00:08:05.473 "superblock": true, 00:08:05.473 "num_base_bdevs": 2, 00:08:05.473 "num_base_bdevs_discovered": 2, 00:08:05.473 "num_base_bdevs_operational": 2, 00:08:05.473 "base_bdevs_list": [ 00:08:05.473 { 00:08:05.473 "name": "pt1", 00:08:05.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.473 "is_configured": true, 00:08:05.473 "data_offset": 2048, 00:08:05.473 "data_size": 63488 00:08:05.473 }, 00:08:05.473 { 00:08:05.473 "name": "pt2", 00:08:05.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.473 "is_configured": true, 00:08:05.473 "data_offset": 2048, 00:08:05.473 "data_size": 63488 00:08:05.473 } 00:08:05.473 ] 00:08:05.473 } 00:08:05.473 } 00:08:05.473 }' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:08:05.473 pt2' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.473 [2024-10-15 09:07:23.329796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.473 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 988539f1-b478-45d5-805f-9246c0c641b0 '!=' 988539f1-b478-45d5-805f-9246c0c641b0 ']' 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.733 [2024-10-15 09:07:23.377565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.733 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.734 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.734 "name": "raid_bdev1", 00:08:05.734 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:05.734 "strip_size_kb": 0, 00:08:05.734 "state": "online", 00:08:05.734 "raid_level": "raid1", 00:08:05.734 "superblock": true, 00:08:05.734 "num_base_bdevs": 2, 00:08:05.734 "num_base_bdevs_discovered": 1, 00:08:05.734 "num_base_bdevs_operational": 1, 00:08:05.734 "base_bdevs_list": [ 00:08:05.734 { 00:08:05.734 "name": null, 00:08:05.734 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:05.734 "is_configured": false, 00:08:05.734 "data_offset": 0, 00:08:05.734 "data_size": 63488 00:08:05.734 }, 00:08:05.734 { 00:08:05.734 "name": "pt2", 00:08:05.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.734 "is_configured": true, 00:08:05.734 "data_offset": 2048, 00:08:05.734 "data_size": 63488 00:08:05.734 } 00:08:05.734 ] 00:08:05.734 }' 00:08:05.734 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.734 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.302 [2024-10-15 09:07:23.917500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.302 [2024-10-15 09:07:23.917549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.302 [2024-10-15 09:07:23.917657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.302 [2024-10-15 09:07:23.917728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.302 [2024-10-15 09:07:23.917744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.302 
09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:06.302 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.303 [2024-10-15 09:07:23.993470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 
00:08:06.303 [2024-10-15 09:07:23.993568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.303 [2024-10-15 09:07:23.993591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:06.303 [2024-10-15 09:07:23.993604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.303 [2024-10-15 09:07:23.996261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.303 [2024-10-15 09:07:23.996323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:06.303 [2024-10-15 09:07:23.996442] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:06.303 [2024-10-15 09:07:23.996500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:06.303 [2024-10-15 09:07:23.996630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:06.303 [2024-10-15 09:07:23.996654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.303 [2024-10-15 09:07:23.996977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:06.303 [2024-10-15 09:07:23.997171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:06.303 [2024-10-15 09:07:23.997182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:06.303 [2024-10-15 09:07:23.997450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.303 pt2 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.303 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.303 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.303 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.303 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.303 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.303 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.303 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.303 "name": "raid_bdev1", 00:08:06.303 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:06.303 "strip_size_kb": 0, 00:08:06.303 "state": "online", 00:08:06.303 "raid_level": "raid1", 00:08:06.303 "superblock": true, 00:08:06.303 "num_base_bdevs": 2, 00:08:06.303 "num_base_bdevs_discovered": 1, 00:08:06.303 "num_base_bdevs_operational": 1, 00:08:06.303 "base_bdevs_list": [ 00:08:06.303 { 00:08:06.303 "name": null, 00:08:06.303 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:06.303 "is_configured": false, 00:08:06.303 "data_offset": 2048, 00:08:06.303 "data_size": 63488 00:08:06.303 }, 00:08:06.303 { 00:08:06.303 "name": "pt2", 00:08:06.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.303 "is_configured": true, 00:08:06.303 "data_offset": 2048, 00:08:06.303 "data_size": 63488 00:08:06.303 } 00:08:06.303 ] 00:08:06.303 }' 00:08:06.303 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.303 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.562 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:06.562 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.562 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.562 [2024-10-15 09:07:24.445437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.562 [2024-10-15 09:07:24.445570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.562 [2024-10-15 09:07:24.445676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.562 [2024-10-15 09:07:24.445760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.562 [2024-10-15 09:07:24.445773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:06.562 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.562 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:06.562 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.562 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.562 
09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.822 [2024-10-15 09:07:24.493482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:06.822 [2024-10-15 09:07:24.493658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.822 [2024-10-15 09:07:24.493716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:06.822 [2024-10-15 09:07:24.493753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.822 [2024-10-15 09:07:24.496421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.822 [2024-10-15 09:07:24.496535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:06.822 [2024-10-15 09:07:24.496700] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:06.822 [2024-10-15 09:07:24.496794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:06.822 [2024-10-15 09:07:24.496991] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 
00:08:06.822 [2024-10-15 09:07:24.497051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.822 [2024-10-15 09:07:24.497110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:06.822 [2024-10-15 09:07:24.497237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:06.822 [2024-10-15 09:07:24.497397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:06.822 [2024-10-15 09:07:24.497441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.822 [2024-10-15 09:07:24.497784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:06.822 [2024-10-15 09:07:24.498005] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:06.822 [2024-10-15 09:07:24.498026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:06.822 [2024-10-15 09:07:24.498253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.822 pt1 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.822 09:07:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.822 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.822 "name": "raid_bdev1", 00:08:06.822 "uuid": "988539f1-b478-45d5-805f-9246c0c641b0", 00:08:06.822 "strip_size_kb": 0, 00:08:06.822 "state": "online", 00:08:06.822 "raid_level": "raid1", 00:08:06.822 "superblock": true, 00:08:06.822 "num_base_bdevs": 2, 00:08:06.822 "num_base_bdevs_discovered": 1, 00:08:06.822 "num_base_bdevs_operational": 1, 00:08:06.822 "base_bdevs_list": [ 00:08:06.822 { 00:08:06.822 "name": null, 00:08:06.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.822 "is_configured": false, 00:08:06.822 "data_offset": 2048, 00:08:06.822 "data_size": 63488 00:08:06.822 }, 00:08:06.822 { 00:08:06.822 "name": "pt2", 00:08:06.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.823 "is_configured": true, 00:08:06.823 "data_offset": 2048, 00:08:06.823 "data_size": 63488 00:08:06.823 } 
00:08:06.823 ] 00:08:06.823 }' 00:08:06.823 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.823 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.080 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:07.080 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:07.080 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.080 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.080 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.338 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:07.338 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:07.338 09:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.338 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.338 09:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.338 [2024-10-15 09:07:24.993734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 988539f1-b478-45d5-805f-9246c0c641b0 '!=' 988539f1-b478-45d5-805f-9246c0c641b0 ']' 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63254 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63254 ']' 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 63254 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63254 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63254' 00:08:07.338 killing process with pid 63254 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63254 00:08:07.338 [2024-10-15 09:07:25.077792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.338 [2024-10-15 09:07:25.077919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.338 [2024-10-15 09:07:25.077979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.338 [2024-10-15 09:07:25.077996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:07.338 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63254 00:08:07.596 [2024-10-15 09:07:25.326188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.969 09:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:08.970 00:08:08.970 real 0m6.611s 00:08:08.970 user 0m10.021s 00:08:08.970 sys 0m1.058s 00:08:08.970 09:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.970 09:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:08.970 ************************************ 00:08:08.970 END TEST raid_superblock_test 00:08:08.970 ************************************ 00:08:08.970 09:07:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:08.970 09:07:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:08.970 09:07:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.970 09:07:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.970 ************************************ 00:08:08.970 START TEST raid_read_error_test 00:08:08.970 ************************************ 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sMnxWzYDWs 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63591 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63591 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63591 ']' 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:08.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.970 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.970 [2024-10-15 09:07:26.811234] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:08:08.970 [2024-10-15 09:07:26.811479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63591 ] 00:08:09.227 [2024-10-15 09:07:26.982656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.227 [2024-10-15 09:07:27.121779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.486 [2024-10-15 09:07:27.358524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.486 [2024-10-15 09:07:27.358713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.054 BaseBdev1_malloc 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.054 true 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.054 [2024-10-15 09:07:27.826319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:10.054 [2024-10-15 09:07:27.826399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.054 [2024-10-15 09:07:27.826427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:10.054 [2024-10-15 09:07:27.826451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.054 [2024-10-15 09:07:27.828946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.054 [2024-10-15 09:07:27.828992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:10.054 BaseBdev1 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.054 BaseBdev2_malloc 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.054 true 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.054 [2024-10-15 09:07:27.892196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:10.054 [2024-10-15 09:07:27.892287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.054 [2024-10-15 09:07:27.892312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:10.054 [2024-10-15 09:07:27.892325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.054 [2024-10-15 09:07:27.894993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.054 [2024-10-15 09:07:27.895049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:10.054 BaseBdev2 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.054 09:07:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.054 [2024-10-15 09:07:27.904253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.054 [2024-10-15 09:07:27.906545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.054 [2024-10-15 09:07:27.906819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.054 [2024-10-15 09:07:27.906846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:10.054 [2024-10-15 09:07:27.907157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:10.054 [2024-10-15 09:07:27.907365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:10.054 [2024-10-15 09:07:27.907377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:10.054 [2024-10-15 09:07:27.907598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.054 09:07:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.055 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.313 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.313 "name": "raid_bdev1", 00:08:10.313 "uuid": "834e3657-1806-44ca-8e3d-de1e4f0709ac", 00:08:10.313 "strip_size_kb": 0, 00:08:10.313 "state": "online", 00:08:10.313 "raid_level": "raid1", 00:08:10.313 "superblock": true, 00:08:10.313 "num_base_bdevs": 2, 00:08:10.313 "num_base_bdevs_discovered": 2, 00:08:10.313 "num_base_bdevs_operational": 2, 00:08:10.313 "base_bdevs_list": [ 00:08:10.313 { 00:08:10.313 "name": "BaseBdev1", 00:08:10.313 "uuid": "3a424582-318b-5bd5-8964-5452b9648556", 00:08:10.313 "is_configured": true, 00:08:10.313 "data_offset": 2048, 00:08:10.313 "data_size": 63488 00:08:10.313 }, 00:08:10.313 { 00:08:10.313 "name": "BaseBdev2", 00:08:10.313 "uuid": "3a65ac7c-1a9f-5502-92a0-4a0887a3f692", 00:08:10.313 "is_configured": true, 
00:08:10.313 "data_offset": 2048, 00:08:10.313 "data_size": 63488 00:08:10.313 } 00:08:10.313 ] 00:08:10.313 }' 00:08:10.313 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.313 09:07:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.573 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:10.573 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:10.573 [2024-10-15 09:07:28.452545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.511 "name": "raid_bdev1", 00:08:11.511 "uuid": "834e3657-1806-44ca-8e3d-de1e4f0709ac", 00:08:11.511 "strip_size_kb": 0, 00:08:11.511 "state": "online", 00:08:11.511 "raid_level": "raid1", 00:08:11.511 "superblock": true, 00:08:11.511 "num_base_bdevs": 2, 00:08:11.511 "num_base_bdevs_discovered": 2, 00:08:11.511 "num_base_bdevs_operational": 2, 00:08:11.511 "base_bdevs_list": [ 00:08:11.511 { 00:08:11.511 "name": "BaseBdev1", 00:08:11.511 "uuid": "3a424582-318b-5bd5-8964-5452b9648556", 00:08:11.511 "is_configured": true, 00:08:11.511 "data_offset": 2048, 00:08:11.511 "data_size": 63488 00:08:11.511 }, 00:08:11.511 { 00:08:11.511 "name": 
"BaseBdev2", 00:08:11.511 "uuid": "3a65ac7c-1a9f-5502-92a0-4a0887a3f692", 00:08:11.511 "is_configured": true, 00:08:11.511 "data_offset": 2048, 00:08:11.511 "data_size": 63488 00:08:11.511 } 00:08:11.511 ] 00:08:11.511 }' 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.511 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.076 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.076 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.076 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.076 [2024-10-15 09:07:29.805417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.076 [2024-10-15 09:07:29.805525] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.076 [2024-10-15 09:07:29.808770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.077 [2024-10-15 09:07:29.808880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.077 [2024-10-15 09:07:29.809002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.077 [2024-10-15 09:07:29.809022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:12.077 { 00:08:12.077 "results": [ 00:08:12.077 { 00:08:12.077 "job": "raid_bdev1", 00:08:12.077 "core_mask": "0x1", 00:08:12.077 "workload": "randrw", 00:08:12.077 "percentage": 50, 00:08:12.077 "status": "finished", 00:08:12.077 "queue_depth": 1, 00:08:12.077 "io_size": 131072, 00:08:12.077 "runtime": 1.353741, 00:08:12.077 "iops": 15468.24688031167, 00:08:12.077 "mibps": 1933.5308600389587, 00:08:12.077 "io_failed": 0, 00:08:12.077 "io_timeout": 0, 00:08:12.077 
"avg_latency_us": 61.659189783244294, 00:08:12.077 "min_latency_us": 24.146724890829695, 00:08:12.077 "max_latency_us": 1760.0279475982534 00:08:12.077 } 00:08:12.077 ], 00:08:12.077 "core_count": 1 00:08:12.077 } 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63591 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63591 ']' 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63591 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63591 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63591' 00:08:12.077 killing process with pid 63591 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63591 00:08:12.077 09:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63591 00:08:12.077 [2024-10-15 09:07:29.850628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.336 [2024-10-15 09:07:30.001031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sMnxWzYDWs 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print 
$6}' 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:13.714 ************************************ 00:08:13.714 END TEST raid_read_error_test 00:08:13.714 ************************************ 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:13.714 00:08:13.714 real 0m4.575s 00:08:13.714 user 0m5.556s 00:08:13.714 sys 0m0.541s 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.714 09:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.714 09:07:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:13.714 09:07:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:13.714 09:07:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.715 09:07:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.715 ************************************ 00:08:13.715 START TEST raid_write_error_test 00:08:13.715 ************************************ 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=write 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.fsEOwpXdYS 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63737 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63737 00:08:13.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63737 ']' 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:13.715 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.715 [2024-10-15 09:07:31.440444] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:08:13.715 [2024-10-15 09:07:31.440675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63737 ] 00:08:13.715 [2024-10-15 09:07:31.596284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.973 [2024-10-15 09:07:31.735910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.233 [2024-10-15 09:07:31.973356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.233 [2024-10-15 09:07:31.973419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.512 BaseBdev1_malloc 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.512 true 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.512 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.770 [2024-10-15 09:07:32.411990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:14.770 [2024-10-15 09:07:32.412051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.770 [2024-10-15 09:07:32.412074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:14.770 [2024-10-15 09:07:32.412086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.770 [2024-10-15 09:07:32.414279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.770 [2024-10-15 09:07:32.414390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:14.770 BaseBdev1 00:08:14.770 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.770 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:14.770 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:14.770 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.770 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.770 BaseBdev2_malloc 00:08:14.770 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:14.771 09:07:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.771 true 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.771 [2024-10-15 09:07:32.480076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:14.771 [2024-10-15 09:07:32.480196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.771 [2024-10-15 09:07:32.480238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:14.771 [2024-10-15 09:07:32.480251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.771 [2024-10-15 09:07:32.482723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.771 [2024-10-15 09:07:32.482766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:14.771 BaseBdev2 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.771 [2024-10-15 09:07:32.492136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:14.771 [2024-10-15 09:07:32.494277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.771 [2024-10-15 09:07:32.494542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:14.771 [2024-10-15 09:07:32.494560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.771 [2024-10-15 09:07:32.494889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:14.771 [2024-10-15 09:07:32.495105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:14.771 [2024-10-15 09:07:32.495124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:14.771 [2024-10-15 09:07:32.495353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.771 "name": "raid_bdev1", 00:08:14.771 "uuid": "52c2f0bd-5eb0-4066-9f2a-91d8543c1450", 00:08:14.771 "strip_size_kb": 0, 00:08:14.771 "state": "online", 00:08:14.771 "raid_level": "raid1", 00:08:14.771 "superblock": true, 00:08:14.771 "num_base_bdevs": 2, 00:08:14.771 "num_base_bdevs_discovered": 2, 00:08:14.771 "num_base_bdevs_operational": 2, 00:08:14.771 "base_bdevs_list": [ 00:08:14.771 { 00:08:14.771 "name": "BaseBdev1", 00:08:14.771 "uuid": "fb556f79-a6ed-5c89-a10e-fe4e224575db", 00:08:14.771 "is_configured": true, 00:08:14.771 "data_offset": 2048, 00:08:14.771 "data_size": 63488 00:08:14.771 }, 00:08:14.771 { 00:08:14.771 "name": "BaseBdev2", 00:08:14.771 "uuid": "8bed5c2d-2c55-533b-81d9-84d6d9d757b4", 00:08:14.771 "is_configured": true, 00:08:14.771 "data_offset": 2048, 00:08:14.771 "data_size": 63488 00:08:14.771 } 00:08:14.771 ] 00:08:14.771 }' 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.771 09:07:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.030 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:15.030 09:07:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:15.290 [2024-10-15 09:07:33.020611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:16.227 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.228 [2024-10-15 09:07:33.925778] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:16.228 [2024-10-15 09:07:33.925946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.228 [2024-10-15 09:07:33.926257] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.228 "name": "raid_bdev1", 00:08:16.228 "uuid": "52c2f0bd-5eb0-4066-9f2a-91d8543c1450", 00:08:16.228 "strip_size_kb": 0, 00:08:16.228 "state": "online", 00:08:16.228 "raid_level": "raid1", 00:08:16.228 "superblock": true, 00:08:16.228 "num_base_bdevs": 2, 00:08:16.228 "num_base_bdevs_discovered": 1, 00:08:16.228 "num_base_bdevs_operational": 1, 00:08:16.228 "base_bdevs_list": [ 00:08:16.228 { 00:08:16.228 "name": null, 00:08:16.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.228 "is_configured": false, 00:08:16.228 "data_offset": 0, 00:08:16.228 "data_size": 63488 00:08:16.228 }, 00:08:16.228 { 00:08:16.228 "name": 
"BaseBdev2", 00:08:16.228 "uuid": "8bed5c2d-2c55-533b-81d9-84d6d9d757b4", 00:08:16.228 "is_configured": true, 00:08:16.228 "data_offset": 2048, 00:08:16.228 "data_size": 63488 00:08:16.228 } 00:08:16.228 ] 00:08:16.228 }' 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.228 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.487 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:16.487 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.487 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.487 [2024-10-15 09:07:34.327477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.487 [2024-10-15 09:07:34.327515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.487 [2024-10-15 09:07:34.330512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.487 [2024-10-15 09:07:34.330632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.487 [2024-10-15 09:07:34.330738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.488 [2024-10-15 09:07:34.330754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:16.488 { 00:08:16.488 "results": [ 00:08:16.488 { 00:08:16.488 "job": "raid_bdev1", 00:08:16.488 "core_mask": "0x1", 00:08:16.488 "workload": "randrw", 00:08:16.488 "percentage": 50, 00:08:16.488 "status": "finished", 00:08:16.488 "queue_depth": 1, 00:08:16.488 "io_size": 131072, 00:08:16.488 "runtime": 1.307483, 00:08:16.488 "iops": 18029.297512854853, 00:08:16.488 "mibps": 2253.6621891068567, 00:08:16.488 "io_failed": 0, 00:08:16.488 "io_timeout": 0, 
00:08:16.488 "avg_latency_us": 52.49614115179141, 00:08:16.488 "min_latency_us": 22.69344978165939, 00:08:16.488 "max_latency_us": 1624.0908296943232 00:08:16.488 } 00:08:16.488 ], 00:08:16.488 "core_count": 1 00:08:16.488 } 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63737 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63737 ']' 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63737 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63737 00:08:16.488 killing process with pid 63737 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63737' 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63737 00:08:16.488 [2024-10-15 09:07:34.374486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.488 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63737 00:08:16.747 [2024-10-15 09:07:34.520183] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fsEOwpXdYS 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:18.126 ************************************ 00:08:18.126 END TEST raid_write_error_test 00:08:18.126 ************************************ 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:18.126 00:08:18.126 real 0m4.372s 00:08:18.126 user 0m5.234s 00:08:18.126 sys 0m0.540s 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.126 09:07:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.126 09:07:35 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:18.126 09:07:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:18.126 09:07:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:18.126 09:07:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:18.126 09:07:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.126 09:07:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.126 ************************************ 00:08:18.126 START TEST raid_state_function_test 00:08:18.126 ************************************ 00:08:18.126 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:18.127 
09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63875 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63875' 00:08:18.127 Process raid pid: 63875 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63875 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63875 ']' 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.127 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.127 [2024-10-15 09:07:35.886971] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:08:18.127 [2024-10-15 09:07:35.887118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.386 [2024-10-15 09:07:36.041247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.386 [2024-10-15 09:07:36.179784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.643 [2024-10-15 09:07:36.419215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.643 [2024-10-15 09:07:36.419378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.210 [2024-10-15 09:07:36.808021] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.210 [2024-10-15 09:07:36.808095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.210 [2024-10-15 09:07:36.808108] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.210 [2024-10-15 09:07:36.808136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.210 [2024-10-15 09:07:36.808144] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.210 [2024-10-15 09:07:36.808155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.210 09:07:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.210 "name": "Existed_Raid", 00:08:19.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.210 "strip_size_kb": 64, 00:08:19.210 "state": "configuring", 00:08:19.210 "raid_level": "raid0", 00:08:19.210 "superblock": false, 00:08:19.210 "num_base_bdevs": 3, 00:08:19.210 "num_base_bdevs_discovered": 0, 00:08:19.210 "num_base_bdevs_operational": 3, 00:08:19.210 "base_bdevs_list": [ 00:08:19.210 { 00:08:19.210 "name": "BaseBdev1", 00:08:19.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.210 "is_configured": false, 00:08:19.210 "data_offset": 0, 00:08:19.210 "data_size": 0 00:08:19.210 }, 00:08:19.210 { 00:08:19.210 "name": "BaseBdev2", 00:08:19.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.210 "is_configured": false, 00:08:19.210 "data_offset": 0, 00:08:19.210 "data_size": 0 00:08:19.210 }, 00:08:19.210 { 00:08:19.210 "name": "BaseBdev3", 00:08:19.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.210 "is_configured": false, 00:08:19.210 "data_offset": 0, 00:08:19.210 "data_size": 0 00:08:19.210 } 00:08:19.210 ] 00:08:19.210 }' 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.210 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.469 09:07:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.469 [2024-10-15 09:07:37.287146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.469 [2024-10-15 09:07:37.287263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.469 [2024-10-15 09:07:37.299191] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.469 [2024-10-15 09:07:37.299327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.469 [2024-10-15 09:07:37.299367] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.469 [2024-10-15 09:07:37.299405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.469 [2024-10-15 09:07:37.299437] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.469 [2024-10-15 09:07:37.299475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
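Just above, the trace creates the first base bdev with `rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1` (size in MiB, then block size in bytes). The `"num_blocks": 65536` that the subsequent `bdev_get_bdevs` dump reports for BaseBdev1 follows directly from those two arguments; a standalone shell sketch of the arithmetic (not part of the harness itself):

```shell
# bdev_malloc_create takes the bdev size in MiB and the block size in bytes.
# For `bdev_malloc_create 32 512 -b BaseBdev1` the resulting block count is:
size_mib=32
block_size=512
num_blocks=$(( size_mib * 1024 * 1024 / block_size ))
echo "$num_blocks"   # 65536, matching "num_blocks": 65536 in bdev_get_bdevs
```

The same relation explains the `"data_size": 63488` seen in the superblock tests earlier in the log: 65536 blocks minus the 2048-block superblock data offset.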
00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.469 [2024-10-15 09:07:37.352277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.469 BaseBdev1 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.469 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.728 [ 00:08:19.728 { 00:08:19.728 "name": "BaseBdev1", 00:08:19.728 "aliases": [ 00:08:19.728 "d043a723-051a-4cbc-a852-f383759af3ec" 00:08:19.728 ], 00:08:19.728 
"product_name": "Malloc disk", 00:08:19.728 "block_size": 512, 00:08:19.728 "num_blocks": 65536, 00:08:19.728 "uuid": "d043a723-051a-4cbc-a852-f383759af3ec", 00:08:19.728 "assigned_rate_limits": { 00:08:19.728 "rw_ios_per_sec": 0, 00:08:19.728 "rw_mbytes_per_sec": 0, 00:08:19.728 "r_mbytes_per_sec": 0, 00:08:19.728 "w_mbytes_per_sec": 0 00:08:19.728 }, 00:08:19.728 "claimed": true, 00:08:19.728 "claim_type": "exclusive_write", 00:08:19.728 "zoned": false, 00:08:19.728 "supported_io_types": { 00:08:19.728 "read": true, 00:08:19.728 "write": true, 00:08:19.728 "unmap": true, 00:08:19.728 "flush": true, 00:08:19.728 "reset": true, 00:08:19.728 "nvme_admin": false, 00:08:19.728 "nvme_io": false, 00:08:19.728 "nvme_io_md": false, 00:08:19.728 "write_zeroes": true, 00:08:19.728 "zcopy": true, 00:08:19.728 "get_zone_info": false, 00:08:19.728 "zone_management": false, 00:08:19.728 "zone_append": false, 00:08:19.728 "compare": false, 00:08:19.728 "compare_and_write": false, 00:08:19.728 "abort": true, 00:08:19.728 "seek_hole": false, 00:08:19.728 "seek_data": false, 00:08:19.728 "copy": true, 00:08:19.728 "nvme_iov_md": false 00:08:19.728 }, 00:08:19.728 "memory_domains": [ 00:08:19.728 { 00:08:19.728 "dma_device_id": "system", 00:08:19.728 "dma_device_type": 1 00:08:19.728 }, 00:08:19.728 { 00:08:19.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.728 "dma_device_type": 2 00:08:19.728 } 00:08:19.728 ], 00:08:19.728 "driver_specific": {} 00:08:19.728 } 00:08:19.728 ] 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.728 09:07:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.728 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.728 "name": "Existed_Raid", 00:08:19.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.728 "strip_size_kb": 64, 00:08:19.728 "state": "configuring", 00:08:19.728 "raid_level": "raid0", 00:08:19.728 "superblock": false, 00:08:19.728 "num_base_bdevs": 3, 00:08:19.728 "num_base_bdevs_discovered": 1, 00:08:19.728 "num_base_bdevs_operational": 3, 00:08:19.728 "base_bdevs_list": [ 00:08:19.728 { 00:08:19.728 "name": "BaseBdev1", 
00:08:19.728 "uuid": "d043a723-051a-4cbc-a852-f383759af3ec", 00:08:19.728 "is_configured": true, 00:08:19.728 "data_offset": 0, 00:08:19.728 "data_size": 65536 00:08:19.728 }, 00:08:19.728 { 00:08:19.728 "name": "BaseBdev2", 00:08:19.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.728 "is_configured": false, 00:08:19.728 "data_offset": 0, 00:08:19.728 "data_size": 0 00:08:19.728 }, 00:08:19.728 { 00:08:19.728 "name": "BaseBdev3", 00:08:19.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.728 "is_configured": false, 00:08:19.728 "data_offset": 0, 00:08:19.728 "data_size": 0 00:08:19.728 } 00:08:19.728 ] 00:08:19.728 }' 00:08:19.729 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.729 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.987 [2024-10-15 09:07:37.819666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.987 [2024-10-15 09:07:37.819766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.987 [2024-10-15 
09:07:37.831741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.987 [2024-10-15 09:07:37.833943] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.987 [2024-10-15 09:07:37.834004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.987 [2024-10-15 09:07:37.834017] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.987 [2024-10-15 09:07:37.834029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.987 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.988 "name": "Existed_Raid", 00:08:19.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.988 "strip_size_kb": 64, 00:08:19.988 "state": "configuring", 00:08:19.988 "raid_level": "raid0", 00:08:19.988 "superblock": false, 00:08:19.988 "num_base_bdevs": 3, 00:08:19.988 "num_base_bdevs_discovered": 1, 00:08:19.988 "num_base_bdevs_operational": 3, 00:08:19.988 "base_bdevs_list": [ 00:08:19.988 { 00:08:19.988 "name": "BaseBdev1", 00:08:19.988 "uuid": "d043a723-051a-4cbc-a852-f383759af3ec", 00:08:19.988 "is_configured": true, 00:08:19.988 "data_offset": 0, 00:08:19.988 "data_size": 65536 00:08:19.988 }, 00:08:19.988 { 00:08:19.988 "name": "BaseBdev2", 00:08:19.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.988 "is_configured": false, 00:08:19.988 "data_offset": 0, 00:08:19.988 "data_size": 0 00:08:19.988 }, 00:08:19.988 { 00:08:19.988 "name": "BaseBdev3", 00:08:19.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.988 "is_configured": false, 00:08:19.988 "data_offset": 0, 00:08:19.988 "data_size": 0 00:08:19.988 } 00:08:19.988 ] 00:08:19.988 }' 00:08:20.246 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
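The `verify_raid_bdev_state` calls traced above all follow the same pattern: fetch every raid bdev with `rpc_cmd bdev_raid_get_bdevs all`, isolate one record with the `jq` filter at `bdev_raid.sh@113`, then compare individual fields. A self-contained sketch of that extraction step, using a trimmed copy of the `Existed_Raid` record from this trace in place of live RPC output:

```shell
# Sketch of the bdev_raid.sh@113 extraction pattern. In the real test the
# JSON array comes from `rpc_cmd bdev_raid_get_bdevs all`; here a trimmed
# copy of the Existed_Raid record from the trace stands in for it.
command -v jq >/dev/null 2>&1 || exit 0   # skip gracefully if jq is absent

raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<'EOF'
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3
  }
]
EOF
)

# verify_raid_bdev_state then compares fields one at a time, e.g.:
echo "$raid_bdev_info" | jq -r .state                       # configuring
echo "$raid_bdev_info" | jq -r .num_base_bdevs_discovered   # 1
```

This is why the trace re-runs the same `rpc_cmd`/`jq` pair after every state transition: the discovered/operational counters in the JSON are the only observable evidence that a base bdev was claimed or failed.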
00:08:20.246 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.504 [2024-10-15 09:07:38.346400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.504 BaseBdev2 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:20.504 09:07:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.504 [ 00:08:20.504 { 00:08:20.504 "name": "BaseBdev2", 00:08:20.504 "aliases": [ 00:08:20.504 "c1231c04-cca1-4e44-b7d6-e5ed45fdacfd" 00:08:20.504 ], 00:08:20.504 "product_name": "Malloc disk", 00:08:20.504 "block_size": 512, 00:08:20.504 "num_blocks": 65536, 00:08:20.504 "uuid": "c1231c04-cca1-4e44-b7d6-e5ed45fdacfd", 00:08:20.504 "assigned_rate_limits": { 00:08:20.504 "rw_ios_per_sec": 0, 00:08:20.504 "rw_mbytes_per_sec": 0, 00:08:20.504 "r_mbytes_per_sec": 0, 00:08:20.504 "w_mbytes_per_sec": 0 00:08:20.504 }, 00:08:20.504 "claimed": true, 00:08:20.504 "claim_type": "exclusive_write", 00:08:20.504 "zoned": false, 00:08:20.504 "supported_io_types": { 00:08:20.504 "read": true, 00:08:20.504 "write": true, 00:08:20.504 "unmap": true, 00:08:20.504 "flush": true, 00:08:20.504 "reset": true, 00:08:20.504 "nvme_admin": false, 00:08:20.504 "nvme_io": false, 00:08:20.504 "nvme_io_md": false, 00:08:20.504 "write_zeroes": true, 00:08:20.504 "zcopy": true, 00:08:20.504 "get_zone_info": false, 00:08:20.504 "zone_management": false, 00:08:20.504 "zone_append": false, 00:08:20.504 "compare": false, 00:08:20.504 "compare_and_write": false, 00:08:20.504 "abort": true, 00:08:20.504 "seek_hole": false, 00:08:20.504 "seek_data": false, 00:08:20.504 "copy": true, 00:08:20.504 "nvme_iov_md": false 00:08:20.504 }, 00:08:20.504 "memory_domains": [ 00:08:20.504 { 00:08:20.504 "dma_device_id": "system", 00:08:20.504 "dma_device_type": 1 00:08:20.504 }, 00:08:20.504 { 00:08:20.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.504 "dma_device_type": 2 00:08:20.504 } 00:08:20.504 ], 00:08:20.504 "driver_specific": {} 00:08:20.504 } 00:08:20.504 ] 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.504 09:07:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.504 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.763 09:07:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.763 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.763 "name": "Existed_Raid", 00:08:20.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.763 "strip_size_kb": 64, 00:08:20.763 "state": "configuring", 00:08:20.763 "raid_level": "raid0", 00:08:20.763 "superblock": false, 00:08:20.763 "num_base_bdevs": 3, 00:08:20.763 "num_base_bdevs_discovered": 2, 00:08:20.763 "num_base_bdevs_operational": 3, 00:08:20.763 "base_bdevs_list": [ 00:08:20.763 { 00:08:20.763 "name": "BaseBdev1", 00:08:20.763 "uuid": "d043a723-051a-4cbc-a852-f383759af3ec", 00:08:20.763 "is_configured": true, 00:08:20.763 "data_offset": 0, 00:08:20.763 "data_size": 65536 00:08:20.763 }, 00:08:20.763 { 00:08:20.763 "name": "BaseBdev2", 00:08:20.763 "uuid": "c1231c04-cca1-4e44-b7d6-e5ed45fdacfd", 00:08:20.763 "is_configured": true, 00:08:20.763 "data_offset": 0, 00:08:20.763 "data_size": 65536 00:08:20.763 }, 00:08:20.763 { 00:08:20.763 "name": "BaseBdev3", 00:08:20.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.763 "is_configured": false, 00:08:20.763 "data_offset": 0, 00:08:20.763 "data_size": 0 00:08:20.763 } 00:08:20.763 ] 00:08:20.763 }' 00:08:20.763 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.763 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.021 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:21.021 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.021 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.280 [2024-10-15 09:07:38.922430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:21.280 [2024-10-15 09:07:38.922492] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:21.280 [2024-10-15 09:07:38.922507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:21.280 [2024-10-15 09:07:38.922859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:21.280 [2024-10-15 09:07:38.923064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:21.280 [2024-10-15 09:07:38.923078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:21.280 [2024-10-15 09:07:38.923399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.280 BaseBdev3 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.280 
09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.280 [ 00:08:21.280 { 00:08:21.280 "name": "BaseBdev3", 00:08:21.280 "aliases": [ 00:08:21.280 "f45faae9-4fe5-4612-b5b3-6ab66a708bdb" 00:08:21.280 ], 00:08:21.280 "product_name": "Malloc disk", 00:08:21.280 "block_size": 512, 00:08:21.280 "num_blocks": 65536, 00:08:21.280 "uuid": "f45faae9-4fe5-4612-b5b3-6ab66a708bdb", 00:08:21.280 "assigned_rate_limits": { 00:08:21.280 "rw_ios_per_sec": 0, 00:08:21.280 "rw_mbytes_per_sec": 0, 00:08:21.280 "r_mbytes_per_sec": 0, 00:08:21.280 "w_mbytes_per_sec": 0 00:08:21.280 }, 00:08:21.280 "claimed": true, 00:08:21.280 "claim_type": "exclusive_write", 00:08:21.280 "zoned": false, 00:08:21.280 "supported_io_types": { 00:08:21.280 "read": true, 00:08:21.280 "write": true, 00:08:21.280 "unmap": true, 00:08:21.280 "flush": true, 00:08:21.280 "reset": true, 00:08:21.280 "nvme_admin": false, 00:08:21.280 "nvme_io": false, 00:08:21.280 "nvme_io_md": false, 00:08:21.280 "write_zeroes": true, 00:08:21.280 "zcopy": true, 00:08:21.280 "get_zone_info": false, 00:08:21.280 "zone_management": false, 00:08:21.280 "zone_append": false, 00:08:21.280 "compare": false, 00:08:21.280 "compare_and_write": false, 00:08:21.280 "abort": true, 00:08:21.280 "seek_hole": false, 00:08:21.280 "seek_data": false, 00:08:21.280 "copy": true, 00:08:21.280 "nvme_iov_md": false 00:08:21.280 }, 00:08:21.280 "memory_domains": [ 00:08:21.280 { 00:08:21.280 "dma_device_id": "system", 00:08:21.280 "dma_device_type": 1 00:08:21.280 }, 00:08:21.280 { 00:08:21.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.280 "dma_device_type": 2 00:08:21.280 } 00:08:21.280 ], 00:08:21.280 "driver_specific": {} 00:08:21.280 } 00:08:21.280 ] 
00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.280 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.280 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.280 "name": "Existed_Raid", 00:08:21.280 "uuid": "cc01b762-b209-4c0b-ad81-73b797d1e2fa", 00:08:21.280 "strip_size_kb": 64, 00:08:21.280 "state": "online", 00:08:21.281 "raid_level": "raid0", 00:08:21.281 "superblock": false, 00:08:21.281 "num_base_bdevs": 3, 00:08:21.281 "num_base_bdevs_discovered": 3, 00:08:21.281 "num_base_bdevs_operational": 3, 00:08:21.281 "base_bdevs_list": [ 00:08:21.281 { 00:08:21.281 "name": "BaseBdev1", 00:08:21.281 "uuid": "d043a723-051a-4cbc-a852-f383759af3ec", 00:08:21.281 "is_configured": true, 00:08:21.281 "data_offset": 0, 00:08:21.281 "data_size": 65536 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "name": "BaseBdev2", 00:08:21.281 "uuid": "c1231c04-cca1-4e44-b7d6-e5ed45fdacfd", 00:08:21.281 "is_configured": true, 00:08:21.281 "data_offset": 0, 00:08:21.281 "data_size": 65536 00:08:21.281 }, 00:08:21.281 { 00:08:21.281 "name": "BaseBdev3", 00:08:21.281 "uuid": "f45faae9-4fe5-4612-b5b3-6ab66a708bdb", 00:08:21.281 "is_configured": true, 00:08:21.281 "data_offset": 0, 00:08:21.281 "data_size": 65536 00:08:21.281 } 00:08:21.281 ] 00:08:21.281 }' 00:08:21.281 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.281 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.848 [2024-10-15 09:07:39.454046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.848 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.848 "name": "Existed_Raid", 00:08:21.848 "aliases": [ 00:08:21.848 "cc01b762-b209-4c0b-ad81-73b797d1e2fa" 00:08:21.848 ], 00:08:21.848 "product_name": "Raid Volume", 00:08:21.848 "block_size": 512, 00:08:21.848 "num_blocks": 196608, 00:08:21.848 "uuid": "cc01b762-b209-4c0b-ad81-73b797d1e2fa", 00:08:21.848 "assigned_rate_limits": { 00:08:21.848 "rw_ios_per_sec": 0, 00:08:21.848 "rw_mbytes_per_sec": 0, 00:08:21.848 "r_mbytes_per_sec": 0, 00:08:21.848 "w_mbytes_per_sec": 0 00:08:21.848 }, 00:08:21.848 "claimed": false, 00:08:21.848 "zoned": false, 00:08:21.848 "supported_io_types": { 00:08:21.848 "read": true, 00:08:21.848 "write": true, 00:08:21.848 "unmap": true, 00:08:21.848 "flush": true, 00:08:21.848 "reset": true, 00:08:21.848 "nvme_admin": false, 00:08:21.848 "nvme_io": false, 00:08:21.848 "nvme_io_md": false, 00:08:21.848 "write_zeroes": true, 00:08:21.848 "zcopy": false, 00:08:21.848 "get_zone_info": false, 00:08:21.848 "zone_management": false, 00:08:21.848 
"zone_append": false, 00:08:21.848 "compare": false, 00:08:21.848 "compare_and_write": false, 00:08:21.848 "abort": false, 00:08:21.848 "seek_hole": false, 00:08:21.848 "seek_data": false, 00:08:21.848 "copy": false, 00:08:21.848 "nvme_iov_md": false 00:08:21.848 }, 00:08:21.848 "memory_domains": [ 00:08:21.848 { 00:08:21.848 "dma_device_id": "system", 00:08:21.848 "dma_device_type": 1 00:08:21.848 }, 00:08:21.848 { 00:08:21.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.848 "dma_device_type": 2 00:08:21.848 }, 00:08:21.848 { 00:08:21.848 "dma_device_id": "system", 00:08:21.848 "dma_device_type": 1 00:08:21.848 }, 00:08:21.848 { 00:08:21.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.848 "dma_device_type": 2 00:08:21.848 }, 00:08:21.848 { 00:08:21.848 "dma_device_id": "system", 00:08:21.848 "dma_device_type": 1 00:08:21.848 }, 00:08:21.848 { 00:08:21.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.848 "dma_device_type": 2 00:08:21.848 } 00:08:21.848 ], 00:08:21.848 "driver_specific": { 00:08:21.848 "raid": { 00:08:21.848 "uuid": "cc01b762-b209-4c0b-ad81-73b797d1e2fa", 00:08:21.848 "strip_size_kb": 64, 00:08:21.848 "state": "online", 00:08:21.848 "raid_level": "raid0", 00:08:21.848 "superblock": false, 00:08:21.848 "num_base_bdevs": 3, 00:08:21.848 "num_base_bdevs_discovered": 3, 00:08:21.848 "num_base_bdevs_operational": 3, 00:08:21.848 "base_bdevs_list": [ 00:08:21.848 { 00:08:21.848 "name": "BaseBdev1", 00:08:21.848 "uuid": "d043a723-051a-4cbc-a852-f383759af3ec", 00:08:21.848 "is_configured": true, 00:08:21.848 "data_offset": 0, 00:08:21.848 "data_size": 65536 00:08:21.848 }, 00:08:21.848 { 00:08:21.848 "name": "BaseBdev2", 00:08:21.848 "uuid": "c1231c04-cca1-4e44-b7d6-e5ed45fdacfd", 00:08:21.848 "is_configured": true, 00:08:21.848 "data_offset": 0, 00:08:21.848 "data_size": 65536 00:08:21.848 }, 00:08:21.848 { 00:08:21.848 "name": "BaseBdev3", 00:08:21.848 "uuid": "f45faae9-4fe5-4612-b5b3-6ab66a708bdb", 00:08:21.848 "is_configured": true, 
00:08:21.848 "data_offset": 0, 00:08:21.848 "data_size": 65536 00:08:21.848 } 00:08:21.848 ] 00:08:21.848 } 00:08:21.848 } 00:08:21.848 }' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:21.849 BaseBdev2 00:08:21.849 BaseBdev3' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.849 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.109 [2024-10-15 09:07:39.745315] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:22.109 [2024-10-15 09:07:39.745350] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.109 [2024-10-15 09:07:39.745413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.109 "name": "Existed_Raid", 00:08:22.109 "uuid": "cc01b762-b209-4c0b-ad81-73b797d1e2fa", 00:08:22.109 "strip_size_kb": 64, 00:08:22.109 "state": "offline", 00:08:22.109 "raid_level": "raid0", 00:08:22.109 "superblock": false, 00:08:22.109 "num_base_bdevs": 3, 00:08:22.109 "num_base_bdevs_discovered": 2, 00:08:22.109 "num_base_bdevs_operational": 2, 00:08:22.109 "base_bdevs_list": [ 00:08:22.109 { 00:08:22.109 "name": null, 00:08:22.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.109 "is_configured": false, 00:08:22.109 "data_offset": 0, 00:08:22.109 "data_size": 65536 00:08:22.109 }, 00:08:22.109 { 00:08:22.109 "name": "BaseBdev2", 00:08:22.109 "uuid": "c1231c04-cca1-4e44-b7d6-e5ed45fdacfd", 00:08:22.109 "is_configured": true, 00:08:22.109 "data_offset": 0, 00:08:22.109 "data_size": 65536 00:08:22.109 }, 00:08:22.109 { 00:08:22.109 "name": "BaseBdev3", 00:08:22.109 "uuid": "f45faae9-4fe5-4612-b5b3-6ab66a708bdb", 00:08:22.109 "is_configured": true, 00:08:22.109 "data_offset": 0, 00:08:22.109 "data_size": 65536 00:08:22.109 } 00:08:22.109 ] 00:08:22.109 }' 00:08:22.109 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.109 09:07:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.677 [2024-10-15 09:07:40.383493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.677 09:07:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.677 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.677 [2024-10-15 09:07:40.554784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:22.677 [2024-10-15 09:07:40.554865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.937 BaseBdev2 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:22.937 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.938 [ 00:08:22.938 { 00:08:22.938 "name": "BaseBdev2", 00:08:22.938 "aliases": [ 00:08:22.938 "547e6058-a362-447a-b3d1-dc00f08343e2" 00:08:22.938 ], 00:08:22.938 "product_name": "Malloc disk", 00:08:22.938 "block_size": 512, 00:08:22.938 "num_blocks": 65536, 00:08:22.938 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:22.938 "assigned_rate_limits": { 00:08:22.938 "rw_ios_per_sec": 0, 00:08:22.938 "rw_mbytes_per_sec": 0, 00:08:22.938 "r_mbytes_per_sec": 0, 00:08:22.938 "w_mbytes_per_sec": 0 00:08:22.938 }, 00:08:22.938 "claimed": false, 00:08:22.938 "zoned": false, 00:08:22.938 "supported_io_types": { 00:08:22.938 "read": true, 00:08:22.938 "write": true, 00:08:22.938 "unmap": true, 00:08:22.938 "flush": true, 00:08:22.938 "reset": true, 00:08:22.938 "nvme_admin": false, 00:08:22.938 "nvme_io": false, 00:08:22.938 "nvme_io_md": false, 00:08:22.938 "write_zeroes": true, 00:08:22.938 "zcopy": true, 00:08:22.938 "get_zone_info": false, 00:08:22.938 "zone_management": false, 00:08:22.938 "zone_append": false, 00:08:22.938 "compare": false, 00:08:22.938 "compare_and_write": false, 00:08:22.938 "abort": true, 00:08:22.938 "seek_hole": false, 00:08:22.938 "seek_data": false, 00:08:22.938 "copy": true, 00:08:22.938 "nvme_iov_md": false 00:08:22.938 }, 00:08:22.938 "memory_domains": [ 00:08:22.938 { 00:08:22.938 "dma_device_id": "system", 00:08:22.938 "dma_device_type": 1 00:08:22.938 }, 
00:08:22.938 { 00:08:22.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.938 "dma_device_type": 2 00:08:22.938 } 00:08:22.938 ], 00:08:22.938 "driver_specific": {} 00:08:22.938 } 00:08:22.938 ] 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.938 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.198 BaseBdev3 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.198 [ 00:08:23.198 { 00:08:23.198 "name": "BaseBdev3", 00:08:23.198 "aliases": [ 00:08:23.198 "a2a97e48-fe83-4448-96e1-14196b91539a" 00:08:23.198 ], 00:08:23.198 "product_name": "Malloc disk", 00:08:23.198 "block_size": 512, 00:08:23.198 "num_blocks": 65536, 00:08:23.198 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:23.198 "assigned_rate_limits": { 00:08:23.198 "rw_ios_per_sec": 0, 00:08:23.198 "rw_mbytes_per_sec": 0, 00:08:23.198 "r_mbytes_per_sec": 0, 00:08:23.198 "w_mbytes_per_sec": 0 00:08:23.198 }, 00:08:23.198 "claimed": false, 00:08:23.198 "zoned": false, 00:08:23.198 "supported_io_types": { 00:08:23.198 "read": true, 00:08:23.198 "write": true, 00:08:23.198 "unmap": true, 00:08:23.198 "flush": true, 00:08:23.198 "reset": true, 00:08:23.198 "nvme_admin": false, 00:08:23.198 "nvme_io": false, 00:08:23.198 "nvme_io_md": false, 00:08:23.198 "write_zeroes": true, 00:08:23.198 "zcopy": true, 00:08:23.198 "get_zone_info": false, 00:08:23.198 "zone_management": false, 00:08:23.198 "zone_append": false, 00:08:23.198 "compare": false, 00:08:23.198 "compare_and_write": false, 00:08:23.198 "abort": true, 00:08:23.198 "seek_hole": false, 00:08:23.198 "seek_data": false, 00:08:23.198 "copy": true, 00:08:23.198 "nvme_iov_md": false 00:08:23.198 }, 00:08:23.198 "memory_domains": [ 00:08:23.198 { 00:08:23.198 "dma_device_id": "system", 00:08:23.198 "dma_device_type": 1 00:08:23.198 }, 00:08:23.198 { 
00:08:23.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.198 "dma_device_type": 2 00:08:23.198 } 00:08:23.198 ], 00:08:23.198 "driver_specific": {} 00:08:23.198 } 00:08:23.198 ] 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.198 [2024-10-15 09:07:40.897935] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.198 [2024-10-15 09:07:40.898087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.198 [2024-10-15 09:07:40.898149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.198 [2024-10-15 09:07:40.900294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.198 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.199 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.199 "name": "Existed_Raid", 00:08:23.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.199 "strip_size_kb": 64, 00:08:23.199 "state": "configuring", 00:08:23.199 "raid_level": "raid0", 00:08:23.199 "superblock": false, 00:08:23.199 "num_base_bdevs": 3, 00:08:23.199 "num_base_bdevs_discovered": 2, 00:08:23.199 "num_base_bdevs_operational": 3, 00:08:23.199 "base_bdevs_list": [ 00:08:23.199 { 00:08:23.199 "name": "BaseBdev1", 00:08:23.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.199 
"is_configured": false, 00:08:23.199 "data_offset": 0, 00:08:23.199 "data_size": 0 00:08:23.199 }, 00:08:23.199 { 00:08:23.199 "name": "BaseBdev2", 00:08:23.199 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:23.199 "is_configured": true, 00:08:23.199 "data_offset": 0, 00:08:23.199 "data_size": 65536 00:08:23.199 }, 00:08:23.199 { 00:08:23.199 "name": "BaseBdev3", 00:08:23.199 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:23.199 "is_configured": true, 00:08:23.199 "data_offset": 0, 00:08:23.199 "data_size": 65536 00:08:23.199 } 00:08:23.199 ] 00:08:23.199 }' 00:08:23.199 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.199 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.458 [2024-10-15 09:07:41.321183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.458 09:07:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.458 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.716 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.716 "name": "Existed_Raid", 00:08:23.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.716 "strip_size_kb": 64, 00:08:23.716 "state": "configuring", 00:08:23.716 "raid_level": "raid0", 00:08:23.716 "superblock": false, 00:08:23.716 "num_base_bdevs": 3, 00:08:23.716 "num_base_bdevs_discovered": 1, 00:08:23.716 "num_base_bdevs_operational": 3, 00:08:23.716 "base_bdevs_list": [ 00:08:23.716 { 00:08:23.716 "name": "BaseBdev1", 00:08:23.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.716 "is_configured": false, 00:08:23.716 "data_offset": 0, 00:08:23.716 "data_size": 0 00:08:23.716 }, 00:08:23.716 { 00:08:23.716 "name": null, 00:08:23.716 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:23.716 "is_configured": false, 00:08:23.716 "data_offset": 0, 
00:08:23.716 "data_size": 65536 00:08:23.716 }, 00:08:23.716 { 00:08:23.716 "name": "BaseBdev3", 00:08:23.716 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:23.717 "is_configured": true, 00:08:23.717 "data_offset": 0, 00:08:23.717 "data_size": 65536 00:08:23.717 } 00:08:23.717 ] 00:08:23.717 }' 00:08:23.717 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.717 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.975 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.316 [2024-10-15 09:07:41.902005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.316 BaseBdev1 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.316 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.316 [ 00:08:24.316 { 00:08:24.316 "name": "BaseBdev1", 00:08:24.316 "aliases": [ 00:08:24.316 "96d2311a-93d7-413a-ba25-013fffbef278" 00:08:24.316 ], 00:08:24.316 "product_name": "Malloc disk", 00:08:24.317 "block_size": 512, 00:08:24.317 "num_blocks": 65536, 00:08:24.317 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:24.317 "assigned_rate_limits": { 00:08:24.317 "rw_ios_per_sec": 0, 00:08:24.317 "rw_mbytes_per_sec": 0, 00:08:24.317 "r_mbytes_per_sec": 0, 00:08:24.317 "w_mbytes_per_sec": 0 00:08:24.317 }, 00:08:24.317 "claimed": true, 00:08:24.317 "claim_type": "exclusive_write", 00:08:24.317 "zoned": false, 00:08:24.317 "supported_io_types": { 00:08:24.317 "read": true, 00:08:24.317 "write": true, 00:08:24.317 "unmap": 
true, 00:08:24.317 "flush": true, 00:08:24.317 "reset": true, 00:08:24.317 "nvme_admin": false, 00:08:24.317 "nvme_io": false, 00:08:24.317 "nvme_io_md": false, 00:08:24.317 "write_zeroes": true, 00:08:24.317 "zcopy": true, 00:08:24.317 "get_zone_info": false, 00:08:24.317 "zone_management": false, 00:08:24.317 "zone_append": false, 00:08:24.317 "compare": false, 00:08:24.317 "compare_and_write": false, 00:08:24.317 "abort": true, 00:08:24.317 "seek_hole": false, 00:08:24.317 "seek_data": false, 00:08:24.317 "copy": true, 00:08:24.317 "nvme_iov_md": false 00:08:24.317 }, 00:08:24.317 "memory_domains": [ 00:08:24.317 { 00:08:24.317 "dma_device_id": "system", 00:08:24.317 "dma_device_type": 1 00:08:24.317 }, 00:08:24.317 { 00:08:24.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.317 "dma_device_type": 2 00:08:24.317 } 00:08:24.317 ], 00:08:24.317 "driver_specific": {} 00:08:24.317 } 00:08:24.317 ] 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.317 09:07:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.317 "name": "Existed_Raid", 00:08:24.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.317 "strip_size_kb": 64, 00:08:24.317 "state": "configuring", 00:08:24.317 "raid_level": "raid0", 00:08:24.317 "superblock": false, 00:08:24.317 "num_base_bdevs": 3, 00:08:24.317 "num_base_bdevs_discovered": 2, 00:08:24.317 "num_base_bdevs_operational": 3, 00:08:24.317 "base_bdevs_list": [ 00:08:24.317 { 00:08:24.317 "name": "BaseBdev1", 00:08:24.317 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:24.317 "is_configured": true, 00:08:24.317 "data_offset": 0, 00:08:24.317 "data_size": 65536 00:08:24.317 }, 00:08:24.317 { 00:08:24.317 "name": null, 00:08:24.317 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:24.317 "is_configured": false, 00:08:24.317 "data_offset": 0, 00:08:24.317 "data_size": 65536 00:08:24.317 }, 00:08:24.317 { 00:08:24.317 "name": "BaseBdev3", 00:08:24.317 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:24.317 "is_configured": true, 00:08:24.317 "data_offset": 0, 
00:08:24.317 "data_size": 65536 00:08:24.317 } 00:08:24.317 ] 00:08:24.317 }' 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.317 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 [2024-10-15 09:07:42.461207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.588 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.846 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.846 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.846 "name": "Existed_Raid", 00:08:24.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.846 "strip_size_kb": 64, 00:08:24.846 "state": "configuring", 00:08:24.846 "raid_level": "raid0", 00:08:24.846 "superblock": false, 00:08:24.846 "num_base_bdevs": 3, 00:08:24.846 "num_base_bdevs_discovered": 1, 00:08:24.846 "num_base_bdevs_operational": 3, 00:08:24.846 "base_bdevs_list": [ 00:08:24.846 { 00:08:24.846 "name": "BaseBdev1", 00:08:24.847 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:24.847 "is_configured": true, 00:08:24.847 "data_offset": 0, 00:08:24.847 "data_size": 65536 00:08:24.847 }, 00:08:24.847 { 
00:08:24.847 "name": null, 00:08:24.847 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:24.847 "is_configured": false, 00:08:24.847 "data_offset": 0, 00:08:24.847 "data_size": 65536 00:08:24.847 }, 00:08:24.847 { 00:08:24.847 "name": null, 00:08:24.847 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:24.847 "is_configured": false, 00:08:24.847 "data_offset": 0, 00:08:24.847 "data_size": 65536 00:08:24.847 } 00:08:24.847 ] 00:08:24.847 }' 00:08:24.847 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.847 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.105 [2024-10-15 09:07:42.964391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.105 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.364 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.364 "name": "Existed_Raid", 00:08:25.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.364 "strip_size_kb": 64, 00:08:25.364 "state": "configuring", 00:08:25.364 "raid_level": "raid0", 00:08:25.364 
"superblock": false, 00:08:25.364 "num_base_bdevs": 3, 00:08:25.364 "num_base_bdevs_discovered": 2, 00:08:25.364 "num_base_bdevs_operational": 3, 00:08:25.364 "base_bdevs_list": [ 00:08:25.364 { 00:08:25.364 "name": "BaseBdev1", 00:08:25.364 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:25.364 "is_configured": true, 00:08:25.364 "data_offset": 0, 00:08:25.364 "data_size": 65536 00:08:25.364 }, 00:08:25.364 { 00:08:25.365 "name": null, 00:08:25.365 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:25.365 "is_configured": false, 00:08:25.365 "data_offset": 0, 00:08:25.365 "data_size": 65536 00:08:25.365 }, 00:08:25.365 { 00:08:25.365 "name": "BaseBdev3", 00:08:25.365 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:25.365 "is_configured": true, 00:08:25.365 "data_offset": 0, 00:08:25.365 "data_size": 65536 00:08:25.365 } 00:08:25.365 ] 00:08:25.365 }' 00:08:25.365 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.365 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:25.625 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.625 [2024-10-15 09:07:43.487489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.885 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.886 "name": "Existed_Raid", 00:08:25.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.886 "strip_size_kb": 64, 00:08:25.886 "state": "configuring", 00:08:25.886 "raid_level": "raid0", 00:08:25.886 "superblock": false, 00:08:25.886 "num_base_bdevs": 3, 00:08:25.886 "num_base_bdevs_discovered": 1, 00:08:25.886 "num_base_bdevs_operational": 3, 00:08:25.886 "base_bdevs_list": [ 00:08:25.886 { 00:08:25.886 "name": null, 00:08:25.886 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:25.886 "is_configured": false, 00:08:25.886 "data_offset": 0, 00:08:25.886 "data_size": 65536 00:08:25.886 }, 00:08:25.886 { 00:08:25.886 "name": null, 00:08:25.886 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:25.886 "is_configured": false, 00:08:25.886 "data_offset": 0, 00:08:25.886 "data_size": 65536 00:08:25.886 }, 00:08:25.886 { 00:08:25.886 "name": "BaseBdev3", 00:08:25.886 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:25.886 "is_configured": true, 00:08:25.886 "data_offset": 0, 00:08:25.886 "data_size": 65536 00:08:25.886 } 00:08:25.886 ] 00:08:25.886 }' 00:08:25.886 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.886 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.451 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.452 [2024-10-15 09:07:44.127342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.452 "name": "Existed_Raid", 00:08:26.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.452 "strip_size_kb": 64, 00:08:26.452 "state": "configuring", 00:08:26.452 "raid_level": "raid0", 00:08:26.452 "superblock": false, 00:08:26.452 "num_base_bdevs": 3, 00:08:26.452 "num_base_bdevs_discovered": 2, 00:08:26.452 "num_base_bdevs_operational": 3, 00:08:26.452 "base_bdevs_list": [ 00:08:26.452 { 00:08:26.452 "name": null, 00:08:26.452 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:26.452 "is_configured": false, 00:08:26.452 "data_offset": 0, 00:08:26.452 "data_size": 65536 00:08:26.452 }, 00:08:26.452 { 00:08:26.452 "name": "BaseBdev2", 00:08:26.452 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:26.452 "is_configured": true, 00:08:26.452 "data_offset": 0, 00:08:26.452 "data_size": 65536 00:08:26.452 }, 00:08:26.452 { 00:08:26.452 "name": "BaseBdev3", 00:08:26.452 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:26.452 "is_configured": true, 00:08:26.452 "data_offset": 0, 00:08:26.452 "data_size": 65536 00:08:26.452 } 00:08:26.452 ] 00:08:26.452 }' 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.452 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.709 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.709 09:07:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:26.709 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.709 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 96d2311a-93d7-413a-ba25-013fffbef278 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.967 [2024-10-15 09:07:44.703263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:26.967 [2024-10-15 09:07:44.703308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:26.967 [2024-10-15 09:07:44.703318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:26.967 [2024-10-15 09:07:44.703590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:26.967 [2024-10-15 09:07:44.703800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:26.967 [2024-10-15 09:07:44.703825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:26.967 [2024-10-15 09:07:44.704147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.967 NewBaseBdev 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.967 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:26.967 [ 00:08:26.967 { 00:08:26.967 "name": "NewBaseBdev", 00:08:26.967 "aliases": [ 00:08:26.967 "96d2311a-93d7-413a-ba25-013fffbef278" 00:08:26.967 ], 00:08:26.967 "product_name": "Malloc disk", 00:08:26.967 "block_size": 512, 00:08:26.967 "num_blocks": 65536, 00:08:26.967 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:26.967 "assigned_rate_limits": { 00:08:26.967 "rw_ios_per_sec": 0, 00:08:26.967 "rw_mbytes_per_sec": 0, 00:08:26.967 "r_mbytes_per_sec": 0, 00:08:26.967 "w_mbytes_per_sec": 0 00:08:26.967 }, 00:08:26.967 "claimed": true, 00:08:26.967 "claim_type": "exclusive_write", 00:08:26.967 "zoned": false, 00:08:26.967 "supported_io_types": { 00:08:26.967 "read": true, 00:08:26.967 "write": true, 00:08:26.967 "unmap": true, 00:08:26.967 "flush": true, 00:08:26.967 "reset": true, 00:08:26.967 "nvme_admin": false, 00:08:26.967 "nvme_io": false, 00:08:26.967 "nvme_io_md": false, 00:08:26.967 "write_zeroes": true, 00:08:26.967 "zcopy": true, 00:08:26.967 "get_zone_info": false, 00:08:26.967 "zone_management": false, 00:08:26.967 "zone_append": false, 00:08:26.967 "compare": false, 00:08:26.967 "compare_and_write": false, 00:08:26.967 "abort": true, 00:08:26.967 "seek_hole": false, 00:08:26.967 "seek_data": false, 00:08:26.967 "copy": true, 00:08:26.967 "nvme_iov_md": false 00:08:26.967 }, 00:08:26.967 "memory_domains": [ 00:08:26.967 { 00:08:26.967 "dma_device_id": "system", 00:08:26.967 "dma_device_type": 1 00:08:26.967 }, 00:08:26.967 { 00:08:26.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.967 "dma_device_type": 2 00:08:26.967 } 00:08:26.967 ], 00:08:26.967 "driver_specific": {} 00:08:26.967 } 00:08:26.967 ] 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.968 "name": "Existed_Raid", 00:08:26.968 "uuid": "80d5c51a-180e-4fd3-8992-74b977929e0c", 00:08:26.968 "strip_size_kb": 64, 00:08:26.968 "state": "online", 00:08:26.968 "raid_level": "raid0", 00:08:26.968 "superblock": false, 00:08:26.968 "num_base_bdevs": 3, 00:08:26.968 
"num_base_bdevs_discovered": 3, 00:08:26.968 "num_base_bdevs_operational": 3, 00:08:26.968 "base_bdevs_list": [ 00:08:26.968 { 00:08:26.968 "name": "NewBaseBdev", 00:08:26.968 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:26.968 "is_configured": true, 00:08:26.968 "data_offset": 0, 00:08:26.968 "data_size": 65536 00:08:26.968 }, 00:08:26.968 { 00:08:26.968 "name": "BaseBdev2", 00:08:26.968 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:26.968 "is_configured": true, 00:08:26.968 "data_offset": 0, 00:08:26.968 "data_size": 65536 00:08:26.968 }, 00:08:26.968 { 00:08:26.968 "name": "BaseBdev3", 00:08:26.968 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:26.968 "is_configured": true, 00:08:26.968 "data_offset": 0, 00:08:26.968 "data_size": 65536 00:08:26.968 } 00:08:26.968 ] 00:08:26.968 }' 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.968 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.535 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.536 [2024-10-15 09:07:45.150937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.536 "name": "Existed_Raid", 00:08:27.536 "aliases": [ 00:08:27.536 "80d5c51a-180e-4fd3-8992-74b977929e0c" 00:08:27.536 ], 00:08:27.536 "product_name": "Raid Volume", 00:08:27.536 "block_size": 512, 00:08:27.536 "num_blocks": 196608, 00:08:27.536 "uuid": "80d5c51a-180e-4fd3-8992-74b977929e0c", 00:08:27.536 "assigned_rate_limits": { 00:08:27.536 "rw_ios_per_sec": 0, 00:08:27.536 "rw_mbytes_per_sec": 0, 00:08:27.536 "r_mbytes_per_sec": 0, 00:08:27.536 "w_mbytes_per_sec": 0 00:08:27.536 }, 00:08:27.536 "claimed": false, 00:08:27.536 "zoned": false, 00:08:27.536 "supported_io_types": { 00:08:27.536 "read": true, 00:08:27.536 "write": true, 00:08:27.536 "unmap": true, 00:08:27.536 "flush": true, 00:08:27.536 "reset": true, 00:08:27.536 "nvme_admin": false, 00:08:27.536 "nvme_io": false, 00:08:27.536 "nvme_io_md": false, 00:08:27.536 "write_zeroes": true, 00:08:27.536 "zcopy": false, 00:08:27.536 "get_zone_info": false, 00:08:27.536 "zone_management": false, 00:08:27.536 "zone_append": false, 00:08:27.536 "compare": false, 00:08:27.536 "compare_and_write": false, 00:08:27.536 "abort": false, 00:08:27.536 "seek_hole": false, 00:08:27.536 "seek_data": false, 00:08:27.536 "copy": false, 00:08:27.536 "nvme_iov_md": false 00:08:27.536 }, 00:08:27.536 "memory_domains": [ 00:08:27.536 { 00:08:27.536 "dma_device_id": "system", 00:08:27.536 "dma_device_type": 1 00:08:27.536 }, 00:08:27.536 { 00:08:27.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.536 "dma_device_type": 2 00:08:27.536 }, 
00:08:27.536 { 00:08:27.536 "dma_device_id": "system", 00:08:27.536 "dma_device_type": 1 00:08:27.536 }, 00:08:27.536 { 00:08:27.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.536 "dma_device_type": 2 00:08:27.536 }, 00:08:27.536 { 00:08:27.536 "dma_device_id": "system", 00:08:27.536 "dma_device_type": 1 00:08:27.536 }, 00:08:27.536 { 00:08:27.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.536 "dma_device_type": 2 00:08:27.536 } 00:08:27.536 ], 00:08:27.536 "driver_specific": { 00:08:27.536 "raid": { 00:08:27.536 "uuid": "80d5c51a-180e-4fd3-8992-74b977929e0c", 00:08:27.536 "strip_size_kb": 64, 00:08:27.536 "state": "online", 00:08:27.536 "raid_level": "raid0", 00:08:27.536 "superblock": false, 00:08:27.536 "num_base_bdevs": 3, 00:08:27.536 "num_base_bdevs_discovered": 3, 00:08:27.536 "num_base_bdevs_operational": 3, 00:08:27.536 "base_bdevs_list": [ 00:08:27.536 { 00:08:27.536 "name": "NewBaseBdev", 00:08:27.536 "uuid": "96d2311a-93d7-413a-ba25-013fffbef278", 00:08:27.536 "is_configured": true, 00:08:27.536 "data_offset": 0, 00:08:27.536 "data_size": 65536 00:08:27.536 }, 00:08:27.536 { 00:08:27.536 "name": "BaseBdev2", 00:08:27.536 "uuid": "547e6058-a362-447a-b3d1-dc00f08343e2", 00:08:27.536 "is_configured": true, 00:08:27.536 "data_offset": 0, 00:08:27.536 "data_size": 65536 00:08:27.536 }, 00:08:27.536 { 00:08:27.536 "name": "BaseBdev3", 00:08:27.536 "uuid": "a2a97e48-fe83-4448-96e1-14196b91539a", 00:08:27.536 "is_configured": true, 00:08:27.536 "data_offset": 0, 00:08:27.536 "data_size": 65536 00:08:27.536 } 00:08:27.536 ] 00:08:27.536 } 00:08:27.536 } 00:08:27.536 }' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:27.536 BaseBdev2 00:08:27.536 BaseBdev3' 00:08:27.536 09:07:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.536 [2024-10-15 09:07:45.394190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.536 [2024-10-15 09:07:45.394292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.536 [2024-10-15 09:07:45.394427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.536 [2024-10-15 09:07:45.394540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.536 [2024-10-15 09:07:45.394588] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63875 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63875 ']' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63875 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.536 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63875 00:08:27.795 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.795 killing process with pid 63875 00:08:27.795 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.795 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63875' 00:08:27.795 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63875 00:08:27.795 [2024-10-15 09:07:45.441020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.795 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63875 00:08:28.055 [2024-10-15 09:07:45.766144] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.431 ************************************ 00:08:29.431 END TEST raid_state_function_test 00:08:29.431 ************************************ 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:29.431 00:08:29.431 real 0m11.121s 
00:08:29.431 user 0m17.767s 00:08:29.431 sys 0m1.837s 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.431 09:07:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:29.431 09:07:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:29.431 09:07:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.431 09:07:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.431 ************************************ 00:08:29.431 START TEST raid_state_function_test_sb 00:08:29.431 ************************************ 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:29.431 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64502 00:08:29.432 09:07:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64502' 00:08:29.432 Process raid pid: 64502 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64502 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64502 ']' 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.432 09:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 [2024-10-15 09:07:47.063129] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:08:29.432 [2024-10-15 09:07:47.063310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.432 [2024-10-15 09:07:47.229363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.691 [2024-10-15 09:07:47.356894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.691 [2024-10-15 09:07:47.579432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.691 [2024-10-15 09:07:47.579571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.270 [2024-10-15 09:07:47.908739] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.270 [2024-10-15 09:07:47.908837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.270 [2024-10-15 09:07:47.908883] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.270 [2024-10-15 09:07:47.908907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.270 [2024-10-15 09:07:47.908926] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:30.270 [2024-10-15 09:07:47.908947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.270 "name": "Existed_Raid", 00:08:30.270 "uuid": "16f3c8c3-6504-4133-bc30-c5686eacb134", 00:08:30.270 "strip_size_kb": 64, 00:08:30.270 "state": "configuring", 00:08:30.270 "raid_level": "raid0", 00:08:30.270 "superblock": true, 00:08:30.270 "num_base_bdevs": 3, 00:08:30.270 "num_base_bdevs_discovered": 0, 00:08:30.270 "num_base_bdevs_operational": 3, 00:08:30.270 "base_bdevs_list": [ 00:08:30.270 { 00:08:30.270 "name": "BaseBdev1", 00:08:30.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.270 "is_configured": false, 00:08:30.270 "data_offset": 0, 00:08:30.270 "data_size": 0 00:08:30.270 }, 00:08:30.270 { 00:08:30.270 "name": "BaseBdev2", 00:08:30.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.270 "is_configured": false, 00:08:30.270 "data_offset": 0, 00:08:30.270 "data_size": 0 00:08:30.270 }, 00:08:30.270 { 00:08:30.270 "name": "BaseBdev3", 00:08:30.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.270 "is_configured": false, 00:08:30.270 "data_offset": 0, 00:08:30.270 "data_size": 0 00:08:30.270 } 00:08:30.270 ] 00:08:30.270 }' 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.270 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 [2024-10-15 09:07:48.327915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.530 [2024-10-15 09:07:48.327954] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 [2024-10-15 09:07:48.339925] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.530 [2024-10-15 09:07:48.339983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.530 [2024-10-15 09:07:48.339997] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.530 [2024-10-15 09:07:48.340011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.530 [2024-10-15 09:07:48.340020] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.530 [2024-10-15 09:07:48.340033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 [2024-10-15 09:07:48.390637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.530 BaseBdev1 
00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.530 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 [ 00:08:30.530 { 00:08:30.530 "name": "BaseBdev1", 00:08:30.530 "aliases": [ 00:08:30.530 "bb398073-4b76-4072-8f31-c795ee14ae57" 00:08:30.530 ], 00:08:30.530 "product_name": "Malloc disk", 00:08:30.530 "block_size": 512, 00:08:30.530 "num_blocks": 65536, 00:08:30.530 "uuid": "bb398073-4b76-4072-8f31-c795ee14ae57", 00:08:30.530 "assigned_rate_limits": { 00:08:30.530 
"rw_ios_per_sec": 0, 00:08:30.530 "rw_mbytes_per_sec": 0, 00:08:30.530 "r_mbytes_per_sec": 0, 00:08:30.530 "w_mbytes_per_sec": 0 00:08:30.530 }, 00:08:30.530 "claimed": true, 00:08:30.530 "claim_type": "exclusive_write", 00:08:30.530 "zoned": false, 00:08:30.791 "supported_io_types": { 00:08:30.791 "read": true, 00:08:30.791 "write": true, 00:08:30.791 "unmap": true, 00:08:30.791 "flush": true, 00:08:30.791 "reset": true, 00:08:30.791 "nvme_admin": false, 00:08:30.791 "nvme_io": false, 00:08:30.791 "nvme_io_md": false, 00:08:30.791 "write_zeroes": true, 00:08:30.791 "zcopy": true, 00:08:30.791 "get_zone_info": false, 00:08:30.791 "zone_management": false, 00:08:30.791 "zone_append": false, 00:08:30.791 "compare": false, 00:08:30.791 "compare_and_write": false, 00:08:30.791 "abort": true, 00:08:30.791 "seek_hole": false, 00:08:30.791 "seek_data": false, 00:08:30.791 "copy": true, 00:08:30.791 "nvme_iov_md": false 00:08:30.791 }, 00:08:30.791 "memory_domains": [ 00:08:30.791 { 00:08:30.791 "dma_device_id": "system", 00:08:30.791 "dma_device_type": 1 00:08:30.791 }, 00:08:30.791 { 00:08:30.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.791 "dma_device_type": 2 00:08:30.791 } 00:08:30.791 ], 00:08:30.791 "driver_specific": {} 00:08:30.791 } 00:08:30.791 ] 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.791 "name": "Existed_Raid", 00:08:30.791 "uuid": "db88b7e7-fa27-48ff-a841-56303d79d31f", 00:08:30.791 "strip_size_kb": 64, 00:08:30.791 "state": "configuring", 00:08:30.791 "raid_level": "raid0", 00:08:30.791 "superblock": true, 00:08:30.791 "num_base_bdevs": 3, 00:08:30.791 "num_base_bdevs_discovered": 1, 00:08:30.791 "num_base_bdevs_operational": 3, 00:08:30.791 "base_bdevs_list": [ 00:08:30.791 { 00:08:30.791 "name": "BaseBdev1", 00:08:30.791 "uuid": "bb398073-4b76-4072-8f31-c795ee14ae57", 00:08:30.791 "is_configured": true, 00:08:30.791 "data_offset": 2048, 00:08:30.791 "data_size": 63488 
00:08:30.791 }, 00:08:30.791 { 00:08:30.791 "name": "BaseBdev2", 00:08:30.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.791 "is_configured": false, 00:08:30.791 "data_offset": 0, 00:08:30.791 "data_size": 0 00:08:30.791 }, 00:08:30.791 { 00:08:30.791 "name": "BaseBdev3", 00:08:30.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.791 "is_configured": false, 00:08:30.791 "data_offset": 0, 00:08:30.791 "data_size": 0 00:08:30.791 } 00:08:30.791 ] 00:08:30.791 }' 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.791 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.051 [2024-10-15 09:07:48.909812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.051 [2024-10-15 09:07:48.909912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.051 [2024-10-15 09:07:48.917848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.051 [2024-10-15 
09:07:48.919737] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.051 [2024-10-15 09:07:48.919810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.051 [2024-10-15 09:07:48.919838] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:31.051 [2024-10-15 09:07:48.919860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.051 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.052 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.310 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.310 "name": "Existed_Raid", 00:08:31.310 "uuid": "d239ef77-1abb-47ff-bfdb-c36723d85883", 00:08:31.310 "strip_size_kb": 64, 00:08:31.310 "state": "configuring", 00:08:31.310 "raid_level": "raid0", 00:08:31.310 "superblock": true, 00:08:31.310 "num_base_bdevs": 3, 00:08:31.310 "num_base_bdevs_discovered": 1, 00:08:31.310 "num_base_bdevs_operational": 3, 00:08:31.310 "base_bdevs_list": [ 00:08:31.310 { 00:08:31.310 "name": "BaseBdev1", 00:08:31.310 "uuid": "bb398073-4b76-4072-8f31-c795ee14ae57", 00:08:31.310 "is_configured": true, 00:08:31.310 "data_offset": 2048, 00:08:31.310 "data_size": 63488 00:08:31.310 }, 00:08:31.310 { 00:08:31.310 "name": "BaseBdev2", 00:08:31.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.310 "is_configured": false, 00:08:31.310 "data_offset": 0, 00:08:31.310 "data_size": 0 00:08:31.310 }, 00:08:31.310 { 00:08:31.310 "name": "BaseBdev3", 00:08:31.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.310 "is_configured": false, 00:08:31.310 "data_offset": 0, 00:08:31.310 "data_size": 0 00:08:31.310 } 00:08:31.310 ] 00:08:31.310 }' 00:08:31.310 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.310 09:07:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.568 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.568 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.568 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.568 [2024-10-15 09:07:49.407826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.568 BaseBdev2 00:08:31.568 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.568 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.569 [ 00:08:31.569 { 00:08:31.569 "name": "BaseBdev2", 00:08:31.569 "aliases": [ 00:08:31.569 "39af6f6c-3f0a-4598-a4be-aefa532f964f" 00:08:31.569 ], 00:08:31.569 "product_name": "Malloc disk", 00:08:31.569 "block_size": 512, 00:08:31.569 "num_blocks": 65536, 00:08:31.569 "uuid": "39af6f6c-3f0a-4598-a4be-aefa532f964f", 00:08:31.569 "assigned_rate_limits": { 00:08:31.569 "rw_ios_per_sec": 0, 00:08:31.569 "rw_mbytes_per_sec": 0, 00:08:31.569 "r_mbytes_per_sec": 0, 00:08:31.569 "w_mbytes_per_sec": 0 00:08:31.569 }, 00:08:31.569 "claimed": true, 00:08:31.569 "claim_type": "exclusive_write", 00:08:31.569 "zoned": false, 00:08:31.569 "supported_io_types": { 00:08:31.569 "read": true, 00:08:31.569 "write": true, 00:08:31.569 "unmap": true, 00:08:31.569 "flush": true, 00:08:31.569 "reset": true, 00:08:31.569 "nvme_admin": false, 00:08:31.569 "nvme_io": false, 00:08:31.569 "nvme_io_md": false, 00:08:31.569 "write_zeroes": true, 00:08:31.569 "zcopy": true, 00:08:31.569 "get_zone_info": false, 00:08:31.569 "zone_management": false, 00:08:31.569 "zone_append": false, 00:08:31.569 "compare": false, 00:08:31.569 "compare_and_write": false, 00:08:31.569 "abort": true, 00:08:31.569 "seek_hole": false, 00:08:31.569 "seek_data": false, 00:08:31.569 "copy": true, 00:08:31.569 "nvme_iov_md": false 00:08:31.569 }, 00:08:31.569 "memory_domains": [ 00:08:31.569 { 00:08:31.569 "dma_device_id": "system", 00:08:31.569 "dma_device_type": 1 00:08:31.569 }, 00:08:31.569 { 00:08:31.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.569 "dma_device_type": 2 00:08:31.569 } 00:08:31.569 ], 00:08:31.569 "driver_specific": {} 00:08:31.569 } 00:08:31.569 ] 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.569 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.829 09:07:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.829 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.829 "name": "Existed_Raid", 00:08:31.829 "uuid": "d239ef77-1abb-47ff-bfdb-c36723d85883", 00:08:31.829 "strip_size_kb": 64, 00:08:31.829 "state": "configuring", 00:08:31.829 "raid_level": "raid0", 00:08:31.829 "superblock": true, 00:08:31.829 "num_base_bdevs": 3, 00:08:31.829 "num_base_bdevs_discovered": 2, 00:08:31.829 "num_base_bdevs_operational": 3, 00:08:31.829 "base_bdevs_list": [ 00:08:31.829 { 00:08:31.829 "name": "BaseBdev1", 00:08:31.829 "uuid": "bb398073-4b76-4072-8f31-c795ee14ae57", 00:08:31.829 "is_configured": true, 00:08:31.829 "data_offset": 2048, 00:08:31.829 "data_size": 63488 00:08:31.829 }, 00:08:31.829 { 00:08:31.829 "name": "BaseBdev2", 00:08:31.829 "uuid": "39af6f6c-3f0a-4598-a4be-aefa532f964f", 00:08:31.829 "is_configured": true, 00:08:31.829 "data_offset": 2048, 00:08:31.829 "data_size": 63488 00:08:31.829 }, 00:08:31.829 { 00:08:31.829 "name": "BaseBdev3", 00:08:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.829 "is_configured": false, 00:08:31.829 "data_offset": 0, 00:08:31.829 "data_size": 0 00:08:31.829 } 00:08:31.829 ] 00:08:31.829 }' 00:08:31.829 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.829 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.089 [2024-10-15 09:07:49.948793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:32.089 [2024-10-15 09:07:49.949165] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.089 [2024-10-15 09:07:49.949251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:32.089 [2024-10-15 09:07:49.949568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:32.089 [2024-10-15 09:07:49.949810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.089 [2024-10-15 09:07:49.949862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:32.089 BaseBdev3 00:08:32.089 [2024-10-15 09:07:49.950065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.089 [ 00:08:32.089 { 00:08:32.089 "name": "BaseBdev3", 00:08:32.089 "aliases": [ 00:08:32.089 "719badde-af5d-43f7-b990-508a50371547" 00:08:32.089 ], 00:08:32.089 "product_name": "Malloc disk", 00:08:32.089 "block_size": 512, 00:08:32.089 "num_blocks": 65536, 00:08:32.089 "uuid": "719badde-af5d-43f7-b990-508a50371547", 00:08:32.089 "assigned_rate_limits": { 00:08:32.089 "rw_ios_per_sec": 0, 00:08:32.089 "rw_mbytes_per_sec": 0, 00:08:32.089 "r_mbytes_per_sec": 0, 00:08:32.089 "w_mbytes_per_sec": 0 00:08:32.089 }, 00:08:32.089 "claimed": true, 00:08:32.089 "claim_type": "exclusive_write", 00:08:32.089 "zoned": false, 00:08:32.089 "supported_io_types": { 00:08:32.089 "read": true, 00:08:32.089 "write": true, 00:08:32.089 "unmap": true, 00:08:32.089 "flush": true, 00:08:32.089 "reset": true, 00:08:32.089 "nvme_admin": false, 00:08:32.089 "nvme_io": false, 00:08:32.089 "nvme_io_md": false, 00:08:32.089 "write_zeroes": true, 00:08:32.089 "zcopy": true, 00:08:32.089 "get_zone_info": false, 00:08:32.089 "zone_management": false, 00:08:32.089 "zone_append": false, 00:08:32.089 "compare": false, 00:08:32.089 "compare_and_write": false, 00:08:32.089 "abort": true, 00:08:32.089 "seek_hole": false, 00:08:32.089 "seek_data": false, 00:08:32.089 "copy": true, 00:08:32.089 "nvme_iov_md": false 00:08:32.089 }, 00:08:32.089 "memory_domains": [ 00:08:32.089 { 00:08:32.089 "dma_device_id": "system", 00:08:32.089 "dma_device_type": 1 00:08:32.089 }, 00:08:32.089 { 00:08:32.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.089 "dma_device_type": 2 00:08:32.089 } 00:08:32.089 ], 00:08:32.089 "driver_specific": 
{} 00:08:32.089 } 00:08:32.089 ] 00:08:32.089 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:32.349 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.349 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.349 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.349 "name": "Existed_Raid", 00:08:32.349 "uuid": "d239ef77-1abb-47ff-bfdb-c36723d85883", 00:08:32.349 "strip_size_kb": 64, 00:08:32.349 "state": "online", 00:08:32.349 "raid_level": "raid0", 00:08:32.349 "superblock": true, 00:08:32.349 "num_base_bdevs": 3, 00:08:32.349 "num_base_bdevs_discovered": 3, 00:08:32.349 "num_base_bdevs_operational": 3, 00:08:32.349 "base_bdevs_list": [ 00:08:32.349 { 00:08:32.349 "name": "BaseBdev1", 00:08:32.349 "uuid": "bb398073-4b76-4072-8f31-c795ee14ae57", 00:08:32.349 "is_configured": true, 00:08:32.349 "data_offset": 2048, 00:08:32.349 "data_size": 63488 00:08:32.349 }, 00:08:32.349 { 00:08:32.349 "name": "BaseBdev2", 00:08:32.349 "uuid": "39af6f6c-3f0a-4598-a4be-aefa532f964f", 00:08:32.349 "is_configured": true, 00:08:32.349 "data_offset": 2048, 00:08:32.349 "data_size": 63488 00:08:32.349 }, 00:08:32.349 { 00:08:32.349 "name": "BaseBdev3", 00:08:32.349 "uuid": "719badde-af5d-43f7-b990-508a50371547", 00:08:32.349 "is_configured": true, 00:08:32.349 "data_offset": 2048, 00:08:32.349 "data_size": 63488 00:08:32.349 } 00:08:32.349 ] 00:08:32.349 }' 00:08:32.349 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.349 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.609 [2024-10-15 09:07:50.464305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.609 "name": "Existed_Raid", 00:08:32.609 "aliases": [ 00:08:32.609 "d239ef77-1abb-47ff-bfdb-c36723d85883" 00:08:32.609 ], 00:08:32.609 "product_name": "Raid Volume", 00:08:32.609 "block_size": 512, 00:08:32.609 "num_blocks": 190464, 00:08:32.609 "uuid": "d239ef77-1abb-47ff-bfdb-c36723d85883", 00:08:32.609 "assigned_rate_limits": { 00:08:32.609 "rw_ios_per_sec": 0, 00:08:32.609 "rw_mbytes_per_sec": 0, 00:08:32.609 "r_mbytes_per_sec": 0, 00:08:32.609 "w_mbytes_per_sec": 0 00:08:32.609 }, 00:08:32.609 "claimed": false, 00:08:32.609 "zoned": false, 00:08:32.609 "supported_io_types": { 00:08:32.609 "read": true, 00:08:32.609 "write": true, 00:08:32.609 "unmap": true, 00:08:32.609 "flush": true, 00:08:32.609 "reset": true, 00:08:32.609 "nvme_admin": false, 00:08:32.609 "nvme_io": false, 00:08:32.609 "nvme_io_md": false, 00:08:32.609 
"write_zeroes": true, 00:08:32.609 "zcopy": false, 00:08:32.609 "get_zone_info": false, 00:08:32.609 "zone_management": false, 00:08:32.609 "zone_append": false, 00:08:32.609 "compare": false, 00:08:32.609 "compare_and_write": false, 00:08:32.609 "abort": false, 00:08:32.609 "seek_hole": false, 00:08:32.609 "seek_data": false, 00:08:32.609 "copy": false, 00:08:32.609 "nvme_iov_md": false 00:08:32.609 }, 00:08:32.609 "memory_domains": [ 00:08:32.609 { 00:08:32.609 "dma_device_id": "system", 00:08:32.609 "dma_device_type": 1 00:08:32.609 }, 00:08:32.609 { 00:08:32.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.609 "dma_device_type": 2 00:08:32.609 }, 00:08:32.609 { 00:08:32.609 "dma_device_id": "system", 00:08:32.609 "dma_device_type": 1 00:08:32.609 }, 00:08:32.609 { 00:08:32.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.609 "dma_device_type": 2 00:08:32.609 }, 00:08:32.609 { 00:08:32.609 "dma_device_id": "system", 00:08:32.609 "dma_device_type": 1 00:08:32.609 }, 00:08:32.609 { 00:08:32.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.609 "dma_device_type": 2 00:08:32.609 } 00:08:32.609 ], 00:08:32.609 "driver_specific": { 00:08:32.609 "raid": { 00:08:32.609 "uuid": "d239ef77-1abb-47ff-bfdb-c36723d85883", 00:08:32.609 "strip_size_kb": 64, 00:08:32.609 "state": "online", 00:08:32.609 "raid_level": "raid0", 00:08:32.609 "superblock": true, 00:08:32.609 "num_base_bdevs": 3, 00:08:32.609 "num_base_bdevs_discovered": 3, 00:08:32.609 "num_base_bdevs_operational": 3, 00:08:32.609 "base_bdevs_list": [ 00:08:32.609 { 00:08:32.609 "name": "BaseBdev1", 00:08:32.609 "uuid": "bb398073-4b76-4072-8f31-c795ee14ae57", 00:08:32.609 "is_configured": true, 00:08:32.609 "data_offset": 2048, 00:08:32.609 "data_size": 63488 00:08:32.609 }, 00:08:32.609 { 00:08:32.609 "name": "BaseBdev2", 00:08:32.609 "uuid": "39af6f6c-3f0a-4598-a4be-aefa532f964f", 00:08:32.609 "is_configured": true, 00:08:32.609 "data_offset": 2048, 00:08:32.609 "data_size": 63488 00:08:32.609 }, 
00:08:32.609 { 00:08:32.609 "name": "BaseBdev3", 00:08:32.609 "uuid": "719badde-af5d-43f7-b990-508a50371547", 00:08:32.609 "is_configured": true, 00:08:32.609 "data_offset": 2048, 00:08:32.609 "data_size": 63488 00:08:32.609 } 00:08:32.609 ] 00:08:32.609 } 00:08:32.609 } 00:08:32.609 }' 00:08:32.609 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:32.869 BaseBdev2 00:08:32.869 BaseBdev3' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.869 
09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.869 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.869 [2024-10-15 09:07:50.727590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:32.869 [2024-10-15 09:07:50.727620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.869 [2024-10-15 09:07:50.727702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.129 "name": "Existed_Raid", 00:08:33.129 "uuid": "d239ef77-1abb-47ff-bfdb-c36723d85883", 00:08:33.129 "strip_size_kb": 64, 00:08:33.129 "state": "offline", 00:08:33.129 "raid_level": "raid0", 00:08:33.129 "superblock": true, 00:08:33.129 "num_base_bdevs": 3, 00:08:33.129 "num_base_bdevs_discovered": 2, 00:08:33.129 "num_base_bdevs_operational": 2, 00:08:33.129 "base_bdevs_list": [ 00:08:33.129 { 00:08:33.129 "name": null, 00:08:33.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.129 "is_configured": false, 00:08:33.129 "data_offset": 0, 00:08:33.129 "data_size": 63488 00:08:33.129 }, 00:08:33.129 { 00:08:33.129 "name": "BaseBdev2", 00:08:33.129 "uuid": "39af6f6c-3f0a-4598-a4be-aefa532f964f", 00:08:33.129 "is_configured": true, 00:08:33.129 "data_offset": 2048, 00:08:33.129 "data_size": 63488 00:08:33.129 }, 00:08:33.129 { 00:08:33.129 "name": "BaseBdev3", 00:08:33.129 "uuid": "719badde-af5d-43f7-b990-508a50371547", 
00:08:33.129 "is_configured": true, 00:08:33.129 "data_offset": 2048, 00:08:33.129 "data_size": 63488 00:08:33.129 } 00:08:33.129 ] 00:08:33.129 }' 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.129 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 [2024-10-15 09:07:51.355731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.697 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 [2024-10-15 09:07:51.512936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:33.697 [2024-10-15 09:07:51.512993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 BaseBdev2 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 [ 00:08:33.958 { 00:08:33.958 "name": "BaseBdev2", 00:08:33.958 "aliases": [ 00:08:33.958 "8522bc1f-03fb-432d-b340-9cfa7ea17a21" 00:08:33.958 ], 00:08:33.958 "product_name": "Malloc disk", 00:08:33.958 "block_size": 512, 00:08:33.958 "num_blocks": 65536, 00:08:33.958 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:33.958 "assigned_rate_limits": { 00:08:33.958 "rw_ios_per_sec": 0, 00:08:33.958 "rw_mbytes_per_sec": 0, 00:08:33.958 "r_mbytes_per_sec": 0, 00:08:33.958 "w_mbytes_per_sec": 0 00:08:33.958 }, 00:08:33.958 "claimed": false, 00:08:33.958 "zoned": false, 00:08:33.958 "supported_io_types": { 00:08:33.958 "read": true, 00:08:33.958 "write": true, 00:08:33.958 "unmap": true, 00:08:33.958 "flush": true, 00:08:33.958 "reset": true, 00:08:33.958 "nvme_admin": false, 00:08:33.958 "nvme_io": false, 00:08:33.958 "nvme_io_md": false, 00:08:33.958 "write_zeroes": true, 00:08:33.958 "zcopy": true, 00:08:33.958 "get_zone_info": false, 00:08:33.958 "zone_management": false, 00:08:33.958 
"zone_append": false, 00:08:33.958 "compare": false, 00:08:33.958 "compare_and_write": false, 00:08:33.958 "abort": true, 00:08:33.958 "seek_hole": false, 00:08:33.958 "seek_data": false, 00:08:33.958 "copy": true, 00:08:33.958 "nvme_iov_md": false 00:08:33.958 }, 00:08:33.958 "memory_domains": [ 00:08:33.958 { 00:08:33.958 "dma_device_id": "system", 00:08:33.958 "dma_device_type": 1 00:08:33.958 }, 00:08:33.958 { 00:08:33.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.958 "dma_device_type": 2 00:08:33.958 } 00:08:33.958 ], 00:08:33.958 "driver_specific": {} 00:08:33.958 } 00:08:33.958 ] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 BaseBdev3 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.958 
09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.958 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.958 [ 00:08:33.958 { 00:08:33.958 "name": "BaseBdev3", 00:08:33.958 "aliases": [ 00:08:33.958 "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3" 00:08:33.958 ], 00:08:33.958 "product_name": "Malloc disk", 00:08:33.958 "block_size": 512, 00:08:33.958 "num_blocks": 65536, 00:08:33.958 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:33.958 "assigned_rate_limits": { 00:08:33.958 "rw_ios_per_sec": 0, 00:08:33.958 "rw_mbytes_per_sec": 0, 00:08:33.958 "r_mbytes_per_sec": 0, 00:08:33.958 "w_mbytes_per_sec": 0 00:08:33.958 }, 00:08:33.958 "claimed": false, 00:08:33.958 "zoned": false, 00:08:33.958 "supported_io_types": { 00:08:33.958 "read": true, 00:08:33.958 "write": true, 00:08:33.958 "unmap": true, 00:08:33.959 "flush": true, 00:08:33.959 "reset": true, 00:08:33.959 "nvme_admin": false, 00:08:33.959 "nvme_io": false, 00:08:33.959 "nvme_io_md": false, 00:08:33.959 "write_zeroes": true, 00:08:33.959 "zcopy": true, 00:08:33.959 "get_zone_info": false, 
00:08:33.959 "zone_management": false, 00:08:33.959 "zone_append": false, 00:08:33.959 "compare": false, 00:08:33.959 "compare_and_write": false, 00:08:33.959 "abort": true, 00:08:33.959 "seek_hole": false, 00:08:33.959 "seek_data": false, 00:08:33.959 "copy": true, 00:08:33.959 "nvme_iov_md": false 00:08:33.959 }, 00:08:33.959 "memory_domains": [ 00:08:33.959 { 00:08:33.959 "dma_device_id": "system", 00:08:33.959 "dma_device_type": 1 00:08:33.959 }, 00:08:33.959 { 00:08:33.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.959 "dma_device_type": 2 00:08:33.959 } 00:08:33.959 ], 00:08:33.959 "driver_specific": {} 00:08:33.959 } 00:08:33.959 ] 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.959 [2024-10-15 09:07:51.834996] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.959 [2024-10-15 09:07:51.835123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.959 [2024-10-15 09:07:51.835176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.959 [2024-10-15 09:07:51.837031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.959 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.219 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.219 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:34.219 "name": "Existed_Raid", 00:08:34.219 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:34.219 "strip_size_kb": 64, 00:08:34.220 "state": "configuring", 00:08:34.220 "raid_level": "raid0", 00:08:34.220 "superblock": true, 00:08:34.220 "num_base_bdevs": 3, 00:08:34.220 "num_base_bdevs_discovered": 2, 00:08:34.220 "num_base_bdevs_operational": 3, 00:08:34.220 "base_bdevs_list": [ 00:08:34.220 { 00:08:34.220 "name": "BaseBdev1", 00:08:34.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.220 "is_configured": false, 00:08:34.220 "data_offset": 0, 00:08:34.220 "data_size": 0 00:08:34.220 }, 00:08:34.220 { 00:08:34.220 "name": "BaseBdev2", 00:08:34.220 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:34.220 "is_configured": true, 00:08:34.220 "data_offset": 2048, 00:08:34.220 "data_size": 63488 00:08:34.220 }, 00:08:34.220 { 00:08:34.220 "name": "BaseBdev3", 00:08:34.220 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:34.220 "is_configured": true, 00:08:34.220 "data_offset": 2048, 00:08:34.220 "data_size": 63488 00:08:34.220 } 00:08:34.220 ] 00:08:34.220 }' 00:08:34.220 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.220 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.480 [2024-10-15 09:07:52.286197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.480 "name": "Existed_Raid", 00:08:34.480 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:34.480 "strip_size_kb": 64, 00:08:34.480 "state": "configuring", 00:08:34.480 "raid_level": "raid0", 
00:08:34.480 "superblock": true, 00:08:34.480 "num_base_bdevs": 3, 00:08:34.480 "num_base_bdevs_discovered": 1, 00:08:34.480 "num_base_bdevs_operational": 3, 00:08:34.480 "base_bdevs_list": [ 00:08:34.480 { 00:08:34.480 "name": "BaseBdev1", 00:08:34.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.480 "is_configured": false, 00:08:34.480 "data_offset": 0, 00:08:34.480 "data_size": 0 00:08:34.480 }, 00:08:34.480 { 00:08:34.480 "name": null, 00:08:34.480 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:34.480 "is_configured": false, 00:08:34.480 "data_offset": 0, 00:08:34.480 "data_size": 63488 00:08:34.480 }, 00:08:34.480 { 00:08:34.480 "name": "BaseBdev3", 00:08:34.480 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:34.480 "is_configured": true, 00:08:34.480 "data_offset": 2048, 00:08:34.480 "data_size": 63488 00:08:34.480 } 00:08:34.480 ] 00:08:34.480 }' 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.480 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.049 [2024-10-15 09:07:52.879893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.049 BaseBdev1 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.049 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.050 [ 00:08:35.050 { 00:08:35.050 "name": "BaseBdev1", 00:08:35.050 
"aliases": [ 00:08:35.050 "b19dea63-83e3-4086-9309-e045ce75db05" 00:08:35.050 ], 00:08:35.050 "product_name": "Malloc disk", 00:08:35.050 "block_size": 512, 00:08:35.050 "num_blocks": 65536, 00:08:35.050 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:35.050 "assigned_rate_limits": { 00:08:35.050 "rw_ios_per_sec": 0, 00:08:35.050 "rw_mbytes_per_sec": 0, 00:08:35.050 "r_mbytes_per_sec": 0, 00:08:35.050 "w_mbytes_per_sec": 0 00:08:35.050 }, 00:08:35.050 "claimed": true, 00:08:35.050 "claim_type": "exclusive_write", 00:08:35.050 "zoned": false, 00:08:35.050 "supported_io_types": { 00:08:35.050 "read": true, 00:08:35.050 "write": true, 00:08:35.050 "unmap": true, 00:08:35.050 "flush": true, 00:08:35.050 "reset": true, 00:08:35.050 "nvme_admin": false, 00:08:35.050 "nvme_io": false, 00:08:35.050 "nvme_io_md": false, 00:08:35.050 "write_zeroes": true, 00:08:35.050 "zcopy": true, 00:08:35.050 "get_zone_info": false, 00:08:35.050 "zone_management": false, 00:08:35.050 "zone_append": false, 00:08:35.050 "compare": false, 00:08:35.050 "compare_and_write": false, 00:08:35.050 "abort": true, 00:08:35.050 "seek_hole": false, 00:08:35.050 "seek_data": false, 00:08:35.050 "copy": true, 00:08:35.050 "nvme_iov_md": false 00:08:35.050 }, 00:08:35.050 "memory_domains": [ 00:08:35.050 { 00:08:35.050 "dma_device_id": "system", 00:08:35.050 "dma_device_type": 1 00:08:35.050 }, 00:08:35.050 { 00:08:35.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.050 "dma_device_type": 2 00:08:35.050 } 00:08:35.050 ], 00:08:35.050 "driver_specific": {} 00:08:35.050 } 00:08:35.050 ] 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.050 09:07:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.050 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.311 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.311 "name": "Existed_Raid", 00:08:35.311 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:35.311 "strip_size_kb": 64, 00:08:35.311 "state": "configuring", 00:08:35.311 "raid_level": "raid0", 00:08:35.311 "superblock": true, 00:08:35.311 "num_base_bdevs": 3, 00:08:35.311 
"num_base_bdevs_discovered": 2, 00:08:35.311 "num_base_bdevs_operational": 3, 00:08:35.311 "base_bdevs_list": [ 00:08:35.311 { 00:08:35.311 "name": "BaseBdev1", 00:08:35.311 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:35.311 "is_configured": true, 00:08:35.311 "data_offset": 2048, 00:08:35.311 "data_size": 63488 00:08:35.311 }, 00:08:35.311 { 00:08:35.311 "name": null, 00:08:35.311 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:35.311 "is_configured": false, 00:08:35.311 "data_offset": 0, 00:08:35.311 "data_size": 63488 00:08:35.311 }, 00:08:35.311 { 00:08:35.311 "name": "BaseBdev3", 00:08:35.311 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:35.311 "is_configured": true, 00:08:35.311 "data_offset": 2048, 00:08:35.311 "data_size": 63488 00:08:35.311 } 00:08:35.311 ] 00:08:35.311 }' 00:08:35.311 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.311 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.575 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.575 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.575 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.575 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:35.575 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.576 09:07:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.576 [2024-10-15 09:07:53.447012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.576 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.839 09:07:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.839 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.839 "name": "Existed_Raid", 00:08:35.839 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:35.839 "strip_size_kb": 64, 00:08:35.839 "state": "configuring", 00:08:35.839 "raid_level": "raid0", 00:08:35.839 "superblock": true, 00:08:35.839 "num_base_bdevs": 3, 00:08:35.839 "num_base_bdevs_discovered": 1, 00:08:35.839 "num_base_bdevs_operational": 3, 00:08:35.839 "base_bdevs_list": [ 00:08:35.839 { 00:08:35.839 "name": "BaseBdev1", 00:08:35.839 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:35.839 "is_configured": true, 00:08:35.839 "data_offset": 2048, 00:08:35.839 "data_size": 63488 00:08:35.839 }, 00:08:35.839 { 00:08:35.839 "name": null, 00:08:35.839 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:35.839 "is_configured": false, 00:08:35.839 "data_offset": 0, 00:08:35.839 "data_size": 63488 00:08:35.839 }, 00:08:35.839 { 00:08:35.839 "name": null, 00:08:35.839 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:35.839 "is_configured": false, 00:08:35.839 "data_offset": 0, 00:08:35.839 "data_size": 63488 00:08:35.839 } 00:08:35.839 ] 00:08:35.839 }' 00:08:35.839 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.839 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:36.098 09:07:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.098 [2024-10-15 09:07:53.934296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.098 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.357 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.357 "name": "Existed_Raid", 00:08:36.357 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:36.358 "strip_size_kb": 64, 00:08:36.358 "state": "configuring", 00:08:36.358 "raid_level": "raid0", 00:08:36.358 "superblock": true, 00:08:36.358 "num_base_bdevs": 3, 00:08:36.358 "num_base_bdevs_discovered": 2, 00:08:36.358 "num_base_bdevs_operational": 3, 00:08:36.358 "base_bdevs_list": [ 00:08:36.358 { 00:08:36.358 "name": "BaseBdev1", 00:08:36.358 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:36.358 "is_configured": true, 00:08:36.358 "data_offset": 2048, 00:08:36.358 "data_size": 63488 00:08:36.358 }, 00:08:36.358 { 00:08:36.358 "name": null, 00:08:36.358 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:36.358 "is_configured": false, 00:08:36.358 "data_offset": 0, 00:08:36.358 "data_size": 63488 00:08:36.358 }, 00:08:36.358 { 00:08:36.358 "name": "BaseBdev3", 00:08:36.358 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:36.358 "is_configured": true, 00:08:36.358 "data_offset": 2048, 00:08:36.358 "data_size": 63488 00:08:36.358 } 00:08:36.358 ] 00:08:36.358 }' 00:08:36.358 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.358 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.616 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.616 [2024-10-15 09:07:54.449459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.875 "name": "Existed_Raid", 00:08:36.875 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:36.875 "strip_size_kb": 64, 00:08:36.875 "state": "configuring", 00:08:36.875 "raid_level": "raid0", 00:08:36.875 "superblock": true, 00:08:36.875 "num_base_bdevs": 3, 00:08:36.875 "num_base_bdevs_discovered": 1, 00:08:36.875 "num_base_bdevs_operational": 3, 00:08:36.875 "base_bdevs_list": [ 00:08:36.875 { 00:08:36.875 "name": null, 00:08:36.875 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:36.875 "is_configured": false, 00:08:36.875 "data_offset": 0, 00:08:36.875 "data_size": 63488 00:08:36.875 }, 00:08:36.875 { 00:08:36.875 "name": null, 00:08:36.875 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:36.875 "is_configured": false, 00:08:36.875 "data_offset": 0, 00:08:36.875 "data_size": 63488 00:08:36.875 
}, 00:08:36.875 { 00:08:36.875 "name": "BaseBdev3", 00:08:36.875 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:36.875 "is_configured": true, 00:08:36.875 "data_offset": 2048, 00:08:36.875 "data_size": 63488 00:08:36.875 } 00:08:36.875 ] 00:08:36.875 }' 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.875 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.134 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.134 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.134 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.134 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.393 [2024-10-15 09:07:55.078293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.393 "name": "Existed_Raid", 00:08:37.393 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:37.393 "strip_size_kb": 64, 00:08:37.393 "state": "configuring", 00:08:37.393 "raid_level": "raid0", 00:08:37.393 "superblock": true, 00:08:37.393 "num_base_bdevs": 3, 00:08:37.393 "num_base_bdevs_discovered": 2, 00:08:37.393 
"num_base_bdevs_operational": 3, 00:08:37.393 "base_bdevs_list": [ 00:08:37.393 { 00:08:37.393 "name": null, 00:08:37.393 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:37.393 "is_configured": false, 00:08:37.393 "data_offset": 0, 00:08:37.393 "data_size": 63488 00:08:37.393 }, 00:08:37.393 { 00:08:37.393 "name": "BaseBdev2", 00:08:37.393 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:37.393 "is_configured": true, 00:08:37.393 "data_offset": 2048, 00:08:37.393 "data_size": 63488 00:08:37.393 }, 00:08:37.393 { 00:08:37.393 "name": "BaseBdev3", 00:08:37.393 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:37.393 "is_configured": true, 00:08:37.393 "data_offset": 2048, 00:08:37.393 "data_size": 63488 00:08:37.393 } 00:08:37.393 ] 00:08:37.393 }' 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.393 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b19dea63-83e3-4086-9309-e045ce75db05 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.963 [2024-10-15 09:07:55.671943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:37.963 [2024-10-15 09:07:55.672286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:37.963 [2024-10-15 09:07:55.672311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:37.963 [2024-10-15 09:07:55.672588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:37.963 [2024-10-15 09:07:55.672758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:37.963 [2024-10-15 09:07:55.672770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:37.963 [2024-10-15 09:07:55.672924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.963 NewBaseBdev 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:37.963 09:07:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.963 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.963 [ 00:08:37.963 { 00:08:37.963 "name": "NewBaseBdev", 00:08:37.964 "aliases": [ 00:08:37.964 "b19dea63-83e3-4086-9309-e045ce75db05" 00:08:37.964 ], 00:08:37.964 "product_name": "Malloc disk", 00:08:37.964 "block_size": 512, 00:08:37.964 "num_blocks": 65536, 00:08:37.964 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:37.964 "assigned_rate_limits": { 00:08:37.964 "rw_ios_per_sec": 0, 00:08:37.964 "rw_mbytes_per_sec": 0, 00:08:37.964 "r_mbytes_per_sec": 0, 00:08:37.964 "w_mbytes_per_sec": 0 00:08:37.964 }, 00:08:37.964 "claimed": true, 00:08:37.964 "claim_type": "exclusive_write", 00:08:37.964 "zoned": false, 00:08:37.964 "supported_io_types": { 00:08:37.964 "read": true, 00:08:37.964 "write": true, 00:08:37.964 "unmap": true, 
00:08:37.964 "flush": true, 00:08:37.964 "reset": true, 00:08:37.964 "nvme_admin": false, 00:08:37.964 "nvme_io": false, 00:08:37.964 "nvme_io_md": false, 00:08:37.964 "write_zeroes": true, 00:08:37.964 "zcopy": true, 00:08:37.964 "get_zone_info": false, 00:08:37.964 "zone_management": false, 00:08:37.964 "zone_append": false, 00:08:37.964 "compare": false, 00:08:37.964 "compare_and_write": false, 00:08:37.964 "abort": true, 00:08:37.964 "seek_hole": false, 00:08:37.964 "seek_data": false, 00:08:37.964 "copy": true, 00:08:37.964 "nvme_iov_md": false 00:08:37.964 }, 00:08:37.964 "memory_domains": [ 00:08:37.964 { 00:08:37.964 "dma_device_id": "system", 00:08:37.964 "dma_device_type": 1 00:08:37.964 }, 00:08:37.964 { 00:08:37.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.964 "dma_device_type": 2 00:08:37.964 } 00:08:37.964 ], 00:08:37.964 "driver_specific": {} 00:08:37.964 } 00:08:37.964 ] 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.964 09:07:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.964 "name": "Existed_Raid", 00:08:37.964 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:37.964 "strip_size_kb": 64, 00:08:37.964 "state": "online", 00:08:37.964 "raid_level": "raid0", 00:08:37.964 "superblock": true, 00:08:37.964 "num_base_bdevs": 3, 00:08:37.964 "num_base_bdevs_discovered": 3, 00:08:37.964 "num_base_bdevs_operational": 3, 00:08:37.964 "base_bdevs_list": [ 00:08:37.964 { 00:08:37.964 "name": "NewBaseBdev", 00:08:37.964 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:37.964 "is_configured": true, 00:08:37.964 "data_offset": 2048, 00:08:37.964 "data_size": 63488 00:08:37.964 }, 00:08:37.964 { 00:08:37.964 "name": "BaseBdev2", 00:08:37.964 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:37.964 "is_configured": true, 00:08:37.964 "data_offset": 2048, 00:08:37.964 "data_size": 63488 00:08:37.964 }, 00:08:37.964 { 00:08:37.964 "name": "BaseBdev3", 00:08:37.964 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:37.964 "is_configured": 
true, 00:08:37.964 "data_offset": 2048, 00:08:37.964 "data_size": 63488 00:08:37.964 } 00:08:37.964 ] 00:08:37.964 }' 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.964 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.532 [2024-10-15 09:07:56.207480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.532 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.532 "name": "Existed_Raid", 00:08:38.532 "aliases": [ 00:08:38.532 "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874" 00:08:38.532 ], 00:08:38.532 "product_name": "Raid Volume", 
00:08:38.532 "block_size": 512, 00:08:38.532 "num_blocks": 190464, 00:08:38.532 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:38.532 "assigned_rate_limits": { 00:08:38.532 "rw_ios_per_sec": 0, 00:08:38.532 "rw_mbytes_per_sec": 0, 00:08:38.532 "r_mbytes_per_sec": 0, 00:08:38.532 "w_mbytes_per_sec": 0 00:08:38.532 }, 00:08:38.532 "claimed": false, 00:08:38.532 "zoned": false, 00:08:38.532 "supported_io_types": { 00:08:38.533 "read": true, 00:08:38.533 "write": true, 00:08:38.533 "unmap": true, 00:08:38.533 "flush": true, 00:08:38.533 "reset": true, 00:08:38.533 "nvme_admin": false, 00:08:38.533 "nvme_io": false, 00:08:38.533 "nvme_io_md": false, 00:08:38.533 "write_zeroes": true, 00:08:38.533 "zcopy": false, 00:08:38.533 "get_zone_info": false, 00:08:38.533 "zone_management": false, 00:08:38.533 "zone_append": false, 00:08:38.533 "compare": false, 00:08:38.533 "compare_and_write": false, 00:08:38.533 "abort": false, 00:08:38.533 "seek_hole": false, 00:08:38.533 "seek_data": false, 00:08:38.533 "copy": false, 00:08:38.533 "nvme_iov_md": false 00:08:38.533 }, 00:08:38.533 "memory_domains": [ 00:08:38.533 { 00:08:38.533 "dma_device_id": "system", 00:08:38.533 "dma_device_type": 1 00:08:38.533 }, 00:08:38.533 { 00:08:38.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.533 "dma_device_type": 2 00:08:38.533 }, 00:08:38.533 { 00:08:38.533 "dma_device_id": "system", 00:08:38.533 "dma_device_type": 1 00:08:38.533 }, 00:08:38.533 { 00:08:38.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.533 "dma_device_type": 2 00:08:38.533 }, 00:08:38.533 { 00:08:38.533 "dma_device_id": "system", 00:08:38.533 "dma_device_type": 1 00:08:38.533 }, 00:08:38.533 { 00:08:38.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.533 "dma_device_type": 2 00:08:38.533 } 00:08:38.533 ], 00:08:38.533 "driver_specific": { 00:08:38.533 "raid": { 00:08:38.533 "uuid": "c1fd0c40-0ac3-4eb5-b431-0ac8c3df3874", 00:08:38.533 "strip_size_kb": 64, 00:08:38.533 "state": "online", 00:08:38.533 
"raid_level": "raid0", 00:08:38.533 "superblock": true, 00:08:38.533 "num_base_bdevs": 3, 00:08:38.533 "num_base_bdevs_discovered": 3, 00:08:38.533 "num_base_bdevs_operational": 3, 00:08:38.533 "base_bdevs_list": [ 00:08:38.533 { 00:08:38.533 "name": "NewBaseBdev", 00:08:38.533 "uuid": "b19dea63-83e3-4086-9309-e045ce75db05", 00:08:38.533 "is_configured": true, 00:08:38.533 "data_offset": 2048, 00:08:38.533 "data_size": 63488 00:08:38.533 }, 00:08:38.533 { 00:08:38.533 "name": "BaseBdev2", 00:08:38.533 "uuid": "8522bc1f-03fb-432d-b340-9cfa7ea17a21", 00:08:38.533 "is_configured": true, 00:08:38.533 "data_offset": 2048, 00:08:38.533 "data_size": 63488 00:08:38.533 }, 00:08:38.533 { 00:08:38.533 "name": "BaseBdev3", 00:08:38.533 "uuid": "f7472ef6-dd60-46c6-bdc3-cc8c9cca1aa3", 00:08:38.533 "is_configured": true, 00:08:38.533 "data_offset": 2048, 00:08:38.533 "data_size": 63488 00:08:38.533 } 00:08:38.533 ] 00:08:38.533 } 00:08:38.533 } 00:08:38.533 }' 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:38.533 BaseBdev2 00:08:38.533 BaseBdev3' 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.533 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.792 [2024-10-15 09:07:56.490662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.792 [2024-10-15 09:07:56.490709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.792 [2024-10-15 09:07:56.490821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.792 [2024-10-15 09:07:56.490881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.792 [2024-10-15 09:07:56.490895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64502 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64502 ']' 00:08:38.792 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64502 00:08:38.792 09:07:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:38.793 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.793 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64502 00:08:38.793 killing process with pid 64502 00:08:38.793 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.793 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.793 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64502' 00:08:38.793 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64502 00:08:38.793 [2024-10-15 09:07:56.535647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.793 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64502 00:08:39.051 [2024-10-15 09:07:56.869924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.430 09:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:40.430 00:08:40.430 real 0m11.131s 00:08:40.430 user 0m17.647s 00:08:40.430 sys 0m1.929s 00:08:40.430 09:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.430 09:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.430 ************************************ 00:08:40.430 END TEST raid_state_function_test_sb 00:08:40.430 ************************************ 00:08:40.430 09:07:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:40.430 09:07:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:40.430 09:07:58 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.430 09:07:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.430 ************************************ 00:08:40.430 START TEST raid_superblock_test 00:08:40.430 ************************************ 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:40.430 09:07:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65133 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65133 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65133 ']' 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.430 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.430 [2024-10-15 09:07:58.254699] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:08:40.430 [2024-10-15 09:07:58.254959] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65133 ] 00:08:40.689 [2024-10-15 09:07:58.423627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.689 [2024-10-15 09:07:58.554201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.949 [2024-10-15 09:07:58.777168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.949 [2024-10-15 09:07:58.777242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:41.519 
09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.519 malloc1 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.519 [2024-10-15 09:07:59.216067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.519 [2024-10-15 09:07:59.216249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.519 [2024-10-15 09:07:59.216325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:41.519 [2024-10-15 09:07:59.216371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.519 [2024-10-15 09:07:59.218936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.519 [2024-10-15 09:07:59.219027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.519 pt1 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.519 malloc2 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.519 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.519 [2024-10-15 09:07:59.282244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.519 [2024-10-15 09:07:59.282350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.519 [2024-10-15 09:07:59.282384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:41.519 [2024-10-15 09:07:59.282395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.519 [2024-10-15 09:07:59.284973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.520 [2024-10-15 09:07:59.285097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.520 
pt2 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.520 malloc3 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.520 [2024-10-15 09:07:59.383238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:41.520 [2024-10-15 09:07:59.383408] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.520 [2024-10-15 09:07:59.383461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:41.520 [2024-10-15 09:07:59.383505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.520 [2024-10-15 09:07:59.386034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.520 [2024-10-15 09:07:59.386133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:41.520 pt3 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.520 [2024-10-15 09:07:59.395310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.520 [2024-10-15 09:07:59.397499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.520 [2024-10-15 09:07:59.397642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:41.520 [2024-10-15 09:07:59.397913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:41.520 [2024-10-15 09:07:59.397974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:41.520 [2024-10-15 09:07:59.398333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:41.520 [2024-10-15 09:07:59.398586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:41.520 [2024-10-15 09:07:59.398648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:41.520 [2024-10-15 09:07:59.398917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.520 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.520 09:07:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.779 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.779 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.779 "name": "raid_bdev1", 00:08:41.779 "uuid": "393f8e72-4d6b-4a26-b6c7-40be4c85bc08", 00:08:41.779 "strip_size_kb": 64, 00:08:41.779 "state": "online", 00:08:41.779 "raid_level": "raid0", 00:08:41.779 "superblock": true, 00:08:41.779 "num_base_bdevs": 3, 00:08:41.779 "num_base_bdevs_discovered": 3, 00:08:41.779 "num_base_bdevs_operational": 3, 00:08:41.779 "base_bdevs_list": [ 00:08:41.779 { 00:08:41.779 "name": "pt1", 00:08:41.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.779 "is_configured": true, 00:08:41.779 "data_offset": 2048, 00:08:41.779 "data_size": 63488 00:08:41.779 }, 00:08:41.779 { 00:08:41.779 "name": "pt2", 00:08:41.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.779 "is_configured": true, 00:08:41.779 "data_offset": 2048, 00:08:41.779 "data_size": 63488 00:08:41.779 }, 00:08:41.779 { 00:08:41.779 "name": "pt3", 00:08:41.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.779 "is_configured": true, 00:08:41.779 "data_offset": 2048, 00:08:41.779 "data_size": 63488 00:08:41.779 } 00:08:41.779 ] 00:08:41.779 }' 00:08:41.779 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.779 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.038 [2024-10-15 09:07:59.878847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.038 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.038 "name": "raid_bdev1", 00:08:42.038 "aliases": [ 00:08:42.038 "393f8e72-4d6b-4a26-b6c7-40be4c85bc08" 00:08:42.038 ], 00:08:42.038 "product_name": "Raid Volume", 00:08:42.038 "block_size": 512, 00:08:42.038 "num_blocks": 190464, 00:08:42.038 "uuid": "393f8e72-4d6b-4a26-b6c7-40be4c85bc08", 00:08:42.038 "assigned_rate_limits": { 00:08:42.038 "rw_ios_per_sec": 0, 00:08:42.038 "rw_mbytes_per_sec": 0, 00:08:42.038 "r_mbytes_per_sec": 0, 00:08:42.038 "w_mbytes_per_sec": 0 00:08:42.038 }, 00:08:42.038 "claimed": false, 00:08:42.038 "zoned": false, 00:08:42.039 "supported_io_types": { 00:08:42.039 "read": true, 00:08:42.039 "write": true, 00:08:42.039 "unmap": true, 00:08:42.039 "flush": true, 00:08:42.039 "reset": true, 00:08:42.039 "nvme_admin": false, 00:08:42.039 "nvme_io": false, 00:08:42.039 "nvme_io_md": false, 00:08:42.039 "write_zeroes": true, 00:08:42.039 "zcopy": false, 00:08:42.039 "get_zone_info": false, 00:08:42.039 "zone_management": false, 00:08:42.039 "zone_append": false, 00:08:42.039 "compare": 
false, 00:08:42.039 "compare_and_write": false, 00:08:42.039 "abort": false, 00:08:42.039 "seek_hole": false, 00:08:42.039 "seek_data": false, 00:08:42.039 "copy": false, 00:08:42.039 "nvme_iov_md": false 00:08:42.039 }, 00:08:42.039 "memory_domains": [ 00:08:42.039 { 00:08:42.039 "dma_device_id": "system", 00:08:42.039 "dma_device_type": 1 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.039 "dma_device_type": 2 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "system", 00:08:42.039 "dma_device_type": 1 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.039 "dma_device_type": 2 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "system", 00:08:42.039 "dma_device_type": 1 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.039 "dma_device_type": 2 00:08:42.039 } 00:08:42.039 ], 00:08:42.039 "driver_specific": { 00:08:42.039 "raid": { 00:08:42.039 "uuid": "393f8e72-4d6b-4a26-b6c7-40be4c85bc08", 00:08:42.039 "strip_size_kb": 64, 00:08:42.039 "state": "online", 00:08:42.039 "raid_level": "raid0", 00:08:42.039 "superblock": true, 00:08:42.039 "num_base_bdevs": 3, 00:08:42.039 "num_base_bdevs_discovered": 3, 00:08:42.039 "num_base_bdevs_operational": 3, 00:08:42.039 "base_bdevs_list": [ 00:08:42.039 { 00:08:42.039 "name": "pt1", 00:08:42.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.039 "is_configured": true, 00:08:42.039 "data_offset": 2048, 00:08:42.039 "data_size": 63488 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "name": "pt2", 00:08:42.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.039 "is_configured": true, 00:08:42.039 "data_offset": 2048, 00:08:42.039 "data_size": 63488 00:08:42.039 }, 00:08:42.039 { 00:08:42.039 "name": "pt3", 00:08:42.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.039 "is_configured": true, 00:08:42.039 "data_offset": 2048, 00:08:42.039 "data_size": 
63488 00:08:42.039 } 00:08:42.039 ] 00:08:42.039 } 00:08:42.039 } 00:08:42.039 }' 00:08:42.039 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.298 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.298 pt2 00:08:42.298 pt3' 00:08:42.298 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.298 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.298 [2024-10-15 09:08:00.178293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=393f8e72-4d6b-4a26-b6c7-40be4c85bc08 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 393f8e72-4d6b-4a26-b6c7-40be4c85bc08 ']' 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.557 [2024-10-15 09:08:00.209881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.557 [2024-10-15 09:08:00.209977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.557 [2024-10-15 09:08:00.210112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.557 [2024-10-15 09:08:00.210248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.557 [2024-10-15 09:08:00.210308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
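[annotation] The empty `raid_bdev=` assignment above comes from piping the post-delete `bdev_raid_get_bdevs all` output through `jq -r '.[]'`. A minimal standalone sketch of that emptiness check — the empty-array JSON here is an assumed stand-in, not captured from this run:

```shell
# After bdev_raid_delete, bdev_raid_get_bdevs returns an empty array, so
# jq -r '.[]' emits nothing and the script's '[ -n "$raid_bdev" ]' branch
# at bdev_raid.sh@443 is skipped.
raid_bdev=$(echo '[]' | jq -r '.[]')
if [ -z "$raid_bdev" ]; then
  echo "raid bdev deleted"
fi
```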
00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.557 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.558 [2024-10-15 09:08:00.365721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:42.558 [2024-10-15 09:08:00.367920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:42.558 [2024-10-15 09:08:00.368034] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:42.558 [2024-10-15 09:08:00.368133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:42.558 [2024-10-15 09:08:00.368256] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:42.558 [2024-10-15 09:08:00.368320] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:42.558 [2024-10-15 09:08:00.368402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.558 [2024-10-15 09:08:00.368436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:42.558 request: 00:08:42.558 { 00:08:42.558 "name": "raid_bdev1", 00:08:42.558 "raid_level": "raid0", 00:08:42.558 "base_bdevs": [ 00:08:42.558 "malloc1", 00:08:42.558 "malloc2", 00:08:42.558 "malloc3" 00:08:42.558 ], 00:08:42.558 "strip_size_kb": 64, 00:08:42.558 "superblock": false, 00:08:42.558 "method": "bdev_raid_create", 00:08:42.558 "req_id": 1 00:08:42.558 } 00:08:42.558 Got JSON-RPC error response 00:08:42.558 response: 00:08:42.558 { 00:08:42.558 "code": -17, 00:08:42.558 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:42.558 } 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.558 [2024-10-15 09:08:00.433531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.558 [2024-10-15 09:08:00.433675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.558 [2024-10-15 09:08:00.433717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:42.558 [2024-10-15 09:08:00.433729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.558 [2024-10-15 09:08:00.436364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.558 [2024-10-15 09:08:00.436412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.558 [2024-10-15 09:08:00.436549] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:42.558 [2024-10-15 09:08:00.436608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
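[annotation] `verify_raid_bdev_state` (bdev_raid.sh@113) first isolates the raid bdev's info object by name before comparing state, level, strip size, and base bdev counts. A minimal sketch of that selection, using a simplified stand-in for the `bdev_raid_get_bdevs all` JSON:

```shell
# Simplified stand-in for 'bdev_raid_get_bdevs all' output after only pt1
# has been re-created from its superblock.
json='[{"name":"raid_bdev1","state":"configuring","raid_level":"raid0"}]'
# Same filter the test uses to pick raid_bdev1's info blob out of the array.
raid_bdev_info=$(echo "$json" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$raid_bdev_info" | jq -r '.state')
echo "$state"
```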
00:08:42.558 pt1 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.558 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.817 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.817 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.817 "name": "raid_bdev1", 00:08:42.817 "uuid": "393f8e72-4d6b-4a26-b6c7-40be4c85bc08", 00:08:42.817 
"strip_size_kb": 64, 00:08:42.817 "state": "configuring", 00:08:42.817 "raid_level": "raid0", 00:08:42.817 "superblock": true, 00:08:42.817 "num_base_bdevs": 3, 00:08:42.817 "num_base_bdevs_discovered": 1, 00:08:42.817 "num_base_bdevs_operational": 3, 00:08:42.817 "base_bdevs_list": [ 00:08:42.817 { 00:08:42.817 "name": "pt1", 00:08:42.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.817 "is_configured": true, 00:08:42.817 "data_offset": 2048, 00:08:42.817 "data_size": 63488 00:08:42.817 }, 00:08:42.817 { 00:08:42.817 "name": null, 00:08:42.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.817 "is_configured": false, 00:08:42.817 "data_offset": 2048, 00:08:42.817 "data_size": 63488 00:08:42.817 }, 00:08:42.817 { 00:08:42.817 "name": null, 00:08:42.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.817 "is_configured": false, 00:08:42.817 "data_offset": 2048, 00:08:42.817 "data_size": 63488 00:08:42.817 } 00:08:42.817 ] 00:08:42.817 }' 00:08:42.817 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.817 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.078 [2024-10-15 09:08:00.924918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.078 [2024-10-15 09:08:00.925116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.078 [2024-10-15 09:08:00.925152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:43.078 [2024-10-15 09:08:00.925163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.078 [2024-10-15 09:08:00.925735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.078 [2024-10-15 09:08:00.925765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.078 [2024-10-15 09:08:00.925879] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.078 [2024-10-15 09:08:00.925915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.078 pt2 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.078 [2024-10-15 09:08:00.936934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.078 09:08:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.078 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.336 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.336 "name": "raid_bdev1", 00:08:43.336 "uuid": "393f8e72-4d6b-4a26-b6c7-40be4c85bc08", 00:08:43.336 "strip_size_kb": 64, 00:08:43.336 "state": "configuring", 00:08:43.336 "raid_level": "raid0", 00:08:43.336 "superblock": true, 00:08:43.336 "num_base_bdevs": 3, 00:08:43.336 "num_base_bdevs_discovered": 1, 00:08:43.336 "num_base_bdevs_operational": 3, 00:08:43.336 "base_bdevs_list": [ 00:08:43.336 { 00:08:43.336 "name": "pt1", 00:08:43.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.336 "is_configured": true, 00:08:43.336 "data_offset": 2048, 00:08:43.336 "data_size": 63488 00:08:43.336 }, 00:08:43.336 { 00:08:43.336 "name": null, 00:08:43.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.336 "is_configured": false, 00:08:43.336 "data_offset": 0, 00:08:43.336 "data_size": 63488 00:08:43.336 }, 00:08:43.336 { 00:08:43.336 "name": null, 00:08:43.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.336 
"is_configured": false, 00:08:43.336 "data_offset": 2048, 00:08:43.336 "data_size": 63488 00:08:43.336 } 00:08:43.336 ] 00:08:43.336 }' 00:08:43.336 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.336 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.594 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:43.594 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.595 [2024-10-15 09:08:01.400114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.595 [2024-10-15 09:08:01.400318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.595 [2024-10-15 09:08:01.400389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:43.595 [2024-10-15 09:08:01.400444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.595 [2024-10-15 09:08:01.401255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.595 [2024-10-15 09:08:01.401365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.595 [2024-10-15 09:08:01.401545] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.595 [2024-10-15 09:08:01.401627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.595 pt2 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
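[annotation] The dump above shows `num_base_bdevs_discovered: 1` against `num_base_bdevs_operational: 3`, which is why the state stays `configuring` until pt2 and pt3 are re-added in the loop that follows. A sketch of that bookkeeping, over a simplified info blob assumed to match the dump's shape:

```shell
# With only pt1 re-created, one base bdev is discovered out of three
# operational, so the raid bdev remains in the 'configuring' state.
info='{"state":"configuring","num_base_bdevs_discovered":1,"num_base_bdevs_operational":3}'
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
operational=$(echo "$info" | jq -r '.num_base_bdevs_operational')
if [ "$discovered" -lt "$operational" ]; then
  echo "still configuring"
fi
```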
00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.595 [2024-10-15 09:08:01.412083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:43.595 [2024-10-15 09:08:01.412161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.595 [2024-10-15 09:08:01.412184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:43.595 [2024-10-15 09:08:01.412197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.595 [2024-10-15 09:08:01.412719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.595 [2024-10-15 09:08:01.412758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:43.595 [2024-10-15 09:08:01.412861] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:43.595 [2024-10-15 09:08:01.412892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:43.595 [2024-10-15 09:08:01.413044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.595 [2024-10-15 09:08:01.413063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:43.595 [2024-10-15 09:08:01.413413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:43.595 [2024-10-15 09:08:01.413588] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.595 [2024-10-15 09:08:01.413598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:43.595 [2024-10-15 09:08:01.413800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.595 pt3 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.595 "name": "raid_bdev1", 00:08:43.595 "uuid": "393f8e72-4d6b-4a26-b6c7-40be4c85bc08", 00:08:43.595 "strip_size_kb": 64, 00:08:43.595 "state": "online", 00:08:43.595 "raid_level": "raid0", 00:08:43.595 "superblock": true, 00:08:43.595 "num_base_bdevs": 3, 00:08:43.595 "num_base_bdevs_discovered": 3, 00:08:43.595 "num_base_bdevs_operational": 3, 00:08:43.595 "base_bdevs_list": [ 00:08:43.595 { 00:08:43.595 "name": "pt1", 00:08:43.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.595 "is_configured": true, 00:08:43.595 "data_offset": 2048, 00:08:43.595 "data_size": 63488 00:08:43.595 }, 00:08:43.595 { 00:08:43.595 "name": "pt2", 00:08:43.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.595 "is_configured": true, 00:08:43.595 "data_offset": 2048, 00:08:43.595 "data_size": 63488 00:08:43.595 }, 00:08:43.595 { 00:08:43.595 "name": "pt3", 00:08:43.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.595 "is_configured": true, 00:08:43.595 "data_offset": 2048, 00:08:43.595 "data_size": 63488 00:08:43.595 } 00:08:43.595 ] 00:08:43.595 }' 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.595 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.163 09:08:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.163 [2024-10-15 09:08:01.895650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.163 "name": "raid_bdev1", 00:08:44.163 "aliases": [ 00:08:44.163 "393f8e72-4d6b-4a26-b6c7-40be4c85bc08" 00:08:44.163 ], 00:08:44.163 "product_name": "Raid Volume", 00:08:44.163 "block_size": 512, 00:08:44.163 "num_blocks": 190464, 00:08:44.163 "uuid": "393f8e72-4d6b-4a26-b6c7-40be4c85bc08", 00:08:44.163 "assigned_rate_limits": { 00:08:44.163 "rw_ios_per_sec": 0, 00:08:44.163 "rw_mbytes_per_sec": 0, 00:08:44.163 "r_mbytes_per_sec": 0, 00:08:44.163 "w_mbytes_per_sec": 0 00:08:44.163 }, 00:08:44.163 "claimed": false, 00:08:44.163 "zoned": false, 00:08:44.163 "supported_io_types": { 00:08:44.163 "read": true, 00:08:44.163 "write": true, 00:08:44.163 "unmap": true, 00:08:44.163 "flush": true, 00:08:44.163 "reset": true, 00:08:44.163 "nvme_admin": false, 00:08:44.163 "nvme_io": false, 00:08:44.163 "nvme_io_md": false, 00:08:44.163 
"write_zeroes": true, 00:08:44.163 "zcopy": false, 00:08:44.163 "get_zone_info": false, 00:08:44.163 "zone_management": false, 00:08:44.163 "zone_append": false, 00:08:44.163 "compare": false, 00:08:44.163 "compare_and_write": false, 00:08:44.163 "abort": false, 00:08:44.163 "seek_hole": false, 00:08:44.163 "seek_data": false, 00:08:44.163 "copy": false, 00:08:44.163 "nvme_iov_md": false 00:08:44.163 }, 00:08:44.163 "memory_domains": [ 00:08:44.163 { 00:08:44.163 "dma_device_id": "system", 00:08:44.163 "dma_device_type": 1 00:08:44.163 }, 00:08:44.163 { 00:08:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.163 "dma_device_type": 2 00:08:44.163 }, 00:08:44.163 { 00:08:44.163 "dma_device_id": "system", 00:08:44.163 "dma_device_type": 1 00:08:44.163 }, 00:08:44.163 { 00:08:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.163 "dma_device_type": 2 00:08:44.163 }, 00:08:44.163 { 00:08:44.163 "dma_device_id": "system", 00:08:44.163 "dma_device_type": 1 00:08:44.163 }, 00:08:44.163 { 00:08:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.163 "dma_device_type": 2 00:08:44.163 } 00:08:44.163 ], 00:08:44.163 "driver_specific": { 00:08:44.163 "raid": { 00:08:44.163 "uuid": "393f8e72-4d6b-4a26-b6c7-40be4c85bc08", 00:08:44.163 "strip_size_kb": 64, 00:08:44.163 "state": "online", 00:08:44.163 "raid_level": "raid0", 00:08:44.163 "superblock": true, 00:08:44.163 "num_base_bdevs": 3, 00:08:44.163 "num_base_bdevs_discovered": 3, 00:08:44.163 "num_base_bdevs_operational": 3, 00:08:44.163 "base_bdevs_list": [ 00:08:44.163 { 00:08:44.163 "name": "pt1", 00:08:44.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.163 "is_configured": true, 00:08:44.163 "data_offset": 2048, 00:08:44.163 "data_size": 63488 00:08:44.163 }, 00:08:44.163 { 00:08:44.163 "name": "pt2", 00:08:44.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.163 "is_configured": true, 00:08:44.163 "data_offset": 2048, 00:08:44.163 "data_size": 63488 00:08:44.163 }, 00:08:44.163 
{ 00:08:44.163 "name": "pt3", 00:08:44.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.163 "is_configured": true, 00:08:44.163 "data_offset": 2048, 00:08:44.163 "data_size": 63488 00:08:44.163 } 00:08:44.163 ] 00:08:44.163 } 00:08:44.163 } 00:08:44.163 }' 00:08:44.163 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.164 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.164 pt2 00:08:44.164 pt3' 00:08:44.164 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.164 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.164 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.164 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.164 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.164 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.164 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.423 09:08:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:44.423 
[2024-10-15 09:08:02.191098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 393f8e72-4d6b-4a26-b6c7-40be4c85bc08 '!=' 393f8e72-4d6b-4a26-b6c7-40be4c85bc08 ']' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65133 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65133 ']' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65133 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65133 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65133' 00:08:44.423 killing process with pid 65133 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65133 00:08:44.423 [2024-10-15 09:08:02.275662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.423 09:08:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 65133 00:08:44.423 [2024-10-15 09:08:02.275902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.423 [2024-10-15 09:08:02.275979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.423 [2024-10-15 09:08:02.275993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:45.054 [2024-10-15 09:08:02.626886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.990 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:45.990 00:08:45.990 real 0m5.679s 00:08:45.990 user 0m8.154s 00:08:45.990 sys 0m0.938s 00:08:45.990 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.990 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.990 ************************************ 00:08:45.990 END TEST raid_superblock_test 00:08:45.990 ************************************ 00:08:46.249 09:08:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:46.249 09:08:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:46.249 09:08:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.249 09:08:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.249 ************************************ 00:08:46.249 START TEST raid_read_error_test 00:08:46.249 ************************************ 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:46.249 09:08:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6FnWxmM32j 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65386 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65386 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65386 ']' 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.249 09:08:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.249 [2024-10-15 09:08:04.027675] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:08:46.249 [2024-10-15 09:08:04.028040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65386 ] 00:08:46.508 [2024-10-15 09:08:04.216884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.508 [2024-10-15 09:08:04.354611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.766 [2024-10-15 09:08:04.592953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.766 [2024-10-15 09:08:04.593012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.334 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.334 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:47.334 09:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.334 09:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:47.334 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 BaseBdev1_malloc 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 true 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 [2024-10-15 09:08:05.028613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:47.334 [2024-10-15 09:08:05.028709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.334 [2024-10-15 09:08:05.028738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:47.334 [2024-10-15 09:08:05.028766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.334 [2024-10-15 09:08:05.031357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.334 [2024-10-15 09:08:05.031411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:47.334 BaseBdev1 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 BaseBdev2_malloc 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 true 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 [2024-10-15 09:08:05.100713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:47.334 [2024-10-15 09:08:05.100805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.334 [2024-10-15 09:08:05.100829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:47.334 [2024-10-15 09:08:05.100844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.334 [2024-10-15 09:08:05.103450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.334 [2024-10-15 09:08:05.103593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:47.334 BaseBdev2 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 BaseBdev3_malloc 00:08:47.334 09:08:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 true 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 [2024-10-15 09:08:05.185426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:47.334 [2024-10-15 09:08:05.185512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.334 [2024-10-15 09:08:05.185540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:47.334 [2024-10-15 09:08:05.185554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.334 [2024-10-15 09:08:05.188126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.334 [2024-10-15 09:08:05.188177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:47.334 BaseBdev3 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.334 [2024-10-15 09:08:05.197495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.334 [2024-10-15 09:08:05.199669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.334 [2024-10-15 09:08:05.199786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.334 [2024-10-15 09:08:05.200031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:47.334 [2024-10-15 09:08:05.200048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:47.334 [2024-10-15 09:08:05.200377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:47.334 [2024-10-15 09:08:05.200566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:47.334 [2024-10-15 09:08:05.200580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:47.334 [2024-10-15 09:08:05.200794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.334 09:08:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.334 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.592 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.592 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.592 "name": "raid_bdev1", 00:08:47.592 "uuid": "d3b0feb6-25da-4ef4-ac61-f0ae5d28e9c6", 00:08:47.592 "strip_size_kb": 64, 00:08:47.592 "state": "online", 00:08:47.592 "raid_level": "raid0", 00:08:47.592 "superblock": true, 00:08:47.592 "num_base_bdevs": 3, 00:08:47.592 "num_base_bdevs_discovered": 3, 00:08:47.592 "num_base_bdevs_operational": 3, 00:08:47.592 "base_bdevs_list": [ 00:08:47.592 { 00:08:47.592 "name": "BaseBdev1", 00:08:47.592 "uuid": "df0d6882-601c-5de9-9366-edef9544b539", 00:08:47.592 "is_configured": true, 00:08:47.592 "data_offset": 2048, 00:08:47.592 "data_size": 63488 00:08:47.592 }, 00:08:47.592 { 00:08:47.592 "name": "BaseBdev2", 00:08:47.592 "uuid": "ae1bf5f4-a068-5fd5-8321-5d362620daa7", 00:08:47.592 "is_configured": true, 00:08:47.592 "data_offset": 2048, 00:08:47.592 "data_size": 63488 
00:08:47.592 }, 00:08:47.592 { 00:08:47.592 "name": "BaseBdev3", 00:08:47.592 "uuid": "da6a88e7-87d7-5dd5-8138-dcf2d62bf4db", 00:08:47.592 "is_configured": true, 00:08:47.592 "data_offset": 2048, 00:08:47.592 "data_size": 63488 00:08:47.592 } 00:08:47.592 ] 00:08:47.592 }' 00:08:47.592 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.592 09:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.850 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:47.850 09:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.108 [2024-10-15 09:08:05.794000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.045 09:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.046 09:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.046 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.046 "name": "raid_bdev1", 00:08:49.046 "uuid": "d3b0feb6-25da-4ef4-ac61-f0ae5d28e9c6", 00:08:49.046 "strip_size_kb": 64, 00:08:49.046 "state": "online", 00:08:49.046 "raid_level": "raid0", 00:08:49.046 "superblock": true, 00:08:49.046 "num_base_bdevs": 3, 00:08:49.046 "num_base_bdevs_discovered": 3, 00:08:49.046 "num_base_bdevs_operational": 3, 00:08:49.046 "base_bdevs_list": [ 00:08:49.046 { 00:08:49.046 "name": "BaseBdev1", 00:08:49.046 "uuid": "df0d6882-601c-5de9-9366-edef9544b539", 00:08:49.046 "is_configured": true, 00:08:49.046 "data_offset": 2048, 00:08:49.046 "data_size": 63488 
00:08:49.046 }, 00:08:49.046 { 00:08:49.046 "name": "BaseBdev2", 00:08:49.046 "uuid": "ae1bf5f4-a068-5fd5-8321-5d362620daa7", 00:08:49.046 "is_configured": true, 00:08:49.046 "data_offset": 2048, 00:08:49.046 "data_size": 63488 00:08:49.046 }, 00:08:49.046 { 00:08:49.046 "name": "BaseBdev3", 00:08:49.046 "uuid": "da6a88e7-87d7-5dd5-8138-dcf2d62bf4db", 00:08:49.046 "is_configured": true, 00:08:49.046 "data_offset": 2048, 00:08:49.046 "data_size": 63488 00:08:49.046 } 00:08:49.046 ] 00:08:49.046 }' 00:08:49.046 09:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.046 09:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.305 [2024-10-15 09:08:07.163333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.305 [2024-10-15 09:08:07.163458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.305 [2024-10-15 09:08:07.166654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.305 [2024-10-15 09:08:07.166768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.305 [2024-10-15 09:08:07.166832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.305 [2024-10-15 09:08:07.166881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:49.305 { 00:08:49.305 "results": [ 00:08:49.305 { 00:08:49.305 "job": "raid_bdev1", 00:08:49.305 "core_mask": "0x1", 00:08:49.305 "workload": "randrw", 00:08:49.305 "percentage": 50, 
00:08:49.305 "status": "finished", 00:08:49.305 "queue_depth": 1, 00:08:49.305 "io_size": 131072, 00:08:49.305 "runtime": 1.369775, 00:08:49.305 "iops": 13600.04380281433, 00:08:49.305 "mibps": 1700.0054753517913, 00:08:49.305 "io_failed": 1, 00:08:49.305 "io_timeout": 0, 00:08:49.305 "avg_latency_us": 102.1649131442689, 00:08:49.305 "min_latency_us": 27.053275109170304, 00:08:49.305 "max_latency_us": 1638.4 00:08:49.305 } 00:08:49.305 ], 00:08:49.305 "core_count": 1 00:08:49.305 } 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65386 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65386 ']' 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65386 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.305 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65386 00:08:49.564 killing process with pid 65386 00:08:49.564 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.564 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.564 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65386' 00:08:49.564 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65386 00:08:49.564 [2024-10-15 09:08:07.201226] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.564 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65386 00:08:49.822 [2024-10-15 09:08:07.465309] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6FnWxmM32j 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:51.200 ************************************ 00:08:51.200 END TEST raid_read_error_test 00:08:51.200 ************************************ 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:51.200 00:08:51.200 real 0m4.891s 00:08:51.200 user 0m5.878s 00:08:51.200 sys 0m0.611s 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.200 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 09:08:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:51.200 09:08:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:51.200 09:08:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.200 09:08:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 ************************************ 00:08:51.200 START TEST raid_write_error_test 00:08:51.200 ************************************ 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:51.200 09:08:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:51.200 09:08:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pQc6yGUdkP 00:08:51.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65537 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65537 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65537 ']' 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.200 09:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 [2024-10-15 09:08:08.962269] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:08:51.200 [2024-10-15 09:08:08.962516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65537 ] 00:08:51.459 [2024-10-15 09:08:09.130865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.459 [2024-10-15 09:08:09.258566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.718 [2024-10-15 09:08:09.481729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.718 [2024-10-15 09:08:09.481788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.977 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.977 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.977 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.977 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.977 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.977 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.238 BaseBdev1_malloc 00:08:52.238 09:08:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.238 true 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.238 [2024-10-15 09:08:09.914368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:52.238 [2024-10-15 09:08:09.914491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.238 [2024-10-15 09:08:09.914521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:52.238 [2024-10-15 09:08:09.914535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.238 [2024-10-15 09:08:09.917042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.238 [2024-10-15 09:08:09.917086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:52.238 BaseBdev1 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.238 BaseBdev2_malloc 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.238 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.238 true 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 [2024-10-15 09:08:09.981967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:52.239 [2024-10-15 09:08:09.982050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.239 [2024-10-15 09:08:09.982074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:52.239 [2024-10-15 09:08:09.982088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.239 [2024-10-15 09:08:09.984512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.239 [2024-10-15 09:08:09.984562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:52.239 BaseBdev2 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.239 09:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 BaseBdev3_malloc 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 true 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 [2024-10-15 09:08:10.065902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:52.239 [2024-10-15 09:08:10.065965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.239 [2024-10-15 09:08:10.065986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:52.239 [2024-10-15 09:08:10.065997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.239 [2024-10-15 09:08:10.068180] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.239 [2024-10-15 09:08:10.068219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:52.239 BaseBdev3 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 [2024-10-15 09:08:10.077960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.239 [2024-10-15 09:08:10.080046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.239 [2024-10-15 09:08:10.080139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.239 [2024-10-15 09:08:10.080356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:52.239 [2024-10-15 09:08:10.080372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:52.239 [2024-10-15 09:08:10.080658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:52.239 [2024-10-15 09:08:10.080835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:52.239 [2024-10-15 09:08:10.080849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:52.239 [2024-10-15 09:08:10.081025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.239 
09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.239 "name": "raid_bdev1", 00:08:52.239 "uuid": "0fd8391a-1963-4e00-82a6-7cd26318f659", 00:08:52.239 "strip_size_kb": 64, 00:08:52.239 "state": "online", 00:08:52.239 "raid_level": "raid0", 00:08:52.239 "superblock": true, 
00:08:52.239 "num_base_bdevs": 3, 00:08:52.239 "num_base_bdevs_discovered": 3, 00:08:52.239 "num_base_bdevs_operational": 3, 00:08:52.239 "base_bdevs_list": [ 00:08:52.239 { 00:08:52.239 "name": "BaseBdev1", 00:08:52.239 "uuid": "53512ce9-c132-5389-a490-2f5cac3073db", 00:08:52.239 "is_configured": true, 00:08:52.239 "data_offset": 2048, 00:08:52.239 "data_size": 63488 00:08:52.239 }, 00:08:52.239 { 00:08:52.239 "name": "BaseBdev2", 00:08:52.239 "uuid": "fc271175-7226-5a22-b6db-95332c367077", 00:08:52.239 "is_configured": true, 00:08:52.239 "data_offset": 2048, 00:08:52.239 "data_size": 63488 00:08:52.239 }, 00:08:52.239 { 00:08:52.239 "name": "BaseBdev3", 00:08:52.239 "uuid": "8d6c53a8-5187-584a-b923-23a9e92df5ee", 00:08:52.239 "is_configured": true, 00:08:52.239 "data_offset": 2048, 00:08:52.239 "data_size": 63488 00:08:52.239 } 00:08:52.239 ] 00:08:52.239 }' 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.239 09:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.808 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:52.808 09:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:52.808 [2024-10-15 09:08:10.662523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.744 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:53.744 "name": "raid_bdev1", 00:08:53.744 "uuid": "0fd8391a-1963-4e00-82a6-7cd26318f659", 00:08:53.744 "strip_size_kb": 64, 00:08:53.744 "state": "online", 00:08:53.745 "raid_level": "raid0", 00:08:53.745 "superblock": true, 00:08:53.745 "num_base_bdevs": 3, 00:08:53.745 "num_base_bdevs_discovered": 3, 00:08:53.745 "num_base_bdevs_operational": 3, 00:08:53.745 "base_bdevs_list": [ 00:08:53.745 { 00:08:53.745 "name": "BaseBdev1", 00:08:53.745 "uuid": "53512ce9-c132-5389-a490-2f5cac3073db", 00:08:53.745 "is_configured": true, 00:08:53.745 "data_offset": 2048, 00:08:53.745 "data_size": 63488 00:08:53.745 }, 00:08:53.745 { 00:08:53.745 "name": "BaseBdev2", 00:08:53.745 "uuid": "fc271175-7226-5a22-b6db-95332c367077", 00:08:53.745 "is_configured": true, 00:08:53.745 "data_offset": 2048, 00:08:53.745 "data_size": 63488 00:08:53.745 }, 00:08:53.745 { 00:08:53.745 "name": "BaseBdev3", 00:08:53.745 "uuid": "8d6c53a8-5187-584a-b923-23a9e92df5ee", 00:08:53.745 "is_configured": true, 00:08:53.745 "data_offset": 2048, 00:08:53.745 "data_size": 63488 00:08:53.745 } 00:08:53.745 ] 00:08:53.745 }' 00:08:53.745 09:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.745 09:08:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.333 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.333 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.333 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.333 [2024-10-15 09:08:12.043201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.333 [2024-10-15 09:08:12.043296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.333 [2024-10-15 09:08:12.046397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:08:54.333 [2024-10-15 09:08:12.046491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.333 [2024-10-15 09:08:12.046555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.333 [2024-10-15 09:08:12.046605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:54.333 { 00:08:54.333 "results": [ 00:08:54.333 { 00:08:54.333 "job": "raid_bdev1", 00:08:54.333 "core_mask": "0x1", 00:08:54.333 "workload": "randrw", 00:08:54.333 "percentage": 50, 00:08:54.333 "status": "finished", 00:08:54.333 "queue_depth": 1, 00:08:54.333 "io_size": 131072, 00:08:54.333 "runtime": 1.381399, 00:08:54.333 "iops": 13906.192200805126, 00:08:54.333 "mibps": 1738.2740251006408, 00:08:54.333 "io_failed": 1, 00:08:54.333 "io_timeout": 0, 00:08:54.333 "avg_latency_us": 99.75258807101734, 00:08:54.333 "min_latency_us": 28.28296943231441, 00:08:54.333 "max_latency_us": 1695.6366812227075 00:08:54.333 } 00:08:54.333 ], 00:08:54.333 "core_count": 1 00:08:54.333 } 00:08:54.333 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.333 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65537 00:08:54.333 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65537 ']' 00:08:54.333 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65537 00:08:54.334 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:54.334 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:54.334 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65537 00:08:54.334 killing process with pid 65537 00:08:54.334 09:08:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:54.334 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:54.334 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65537' 00:08:54.334 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65537 00:08:54.334 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65537 00:08:54.334 [2024-10-15 09:08:12.086341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.593 [2024-10-15 09:08:12.350103] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pQc6yGUdkP 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:55.971 00:08:55.971 real 0m4.848s 00:08:55.971 user 0m5.799s 00:08:55.971 sys 0m0.594s 00:08:55.971 ************************************ 00:08:55.971 END TEST raid_write_error_test 00:08:55.971 ************************************ 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.971 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.971 
09:08:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:55.971 09:08:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:08:55.971 09:08:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:55.971 09:08:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.971 09:08:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.971 ************************************ 00:08:55.971 START TEST raid_state_function_test 00:08:55.971 ************************************ 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:55.971 Process raid pid: 65681 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65681 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65681' 
00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65681 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65681 ']' 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.971 09:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.231 [2024-10-15 09:08:13.874931] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:08:56.231 [2024-10-15 09:08:13.875178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.231 [2024-10-15 09:08:14.046162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.489 [2024-10-15 09:08:14.179359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.748 [2024-10-15 09:08:14.427510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.748 [2024-10-15 09:08:14.427732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.007 [2024-10-15 09:08:14.771036] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.007 [2024-10-15 09:08:14.771197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.007 [2024-10-15 09:08:14.771216] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.007 [2024-10-15 09:08:14.771229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.007 [2024-10-15 09:08:14.771237] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:57.007 [2024-10-15 09:08:14.771247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.007 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.007 09:08:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.007 "name": "Existed_Raid", 00:08:57.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.007 "strip_size_kb": 64, 00:08:57.007 "state": "configuring", 00:08:57.007 "raid_level": "concat", 00:08:57.007 "superblock": false, 00:08:57.007 "num_base_bdevs": 3, 00:08:57.007 "num_base_bdevs_discovered": 0, 00:08:57.007 "num_base_bdevs_operational": 3, 00:08:57.007 "base_bdevs_list": [ 00:08:57.007 { 00:08:57.007 "name": "BaseBdev1", 00:08:57.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.007 "is_configured": false, 00:08:57.007 "data_offset": 0, 00:08:57.007 "data_size": 0 00:08:57.007 }, 00:08:57.007 { 00:08:57.007 "name": "BaseBdev2", 00:08:57.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.007 "is_configured": false, 00:08:57.007 "data_offset": 0, 00:08:57.007 "data_size": 0 00:08:57.007 }, 00:08:57.007 { 00:08:57.007 "name": "BaseBdev3", 00:08:57.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.008 "is_configured": false, 00:08:57.008 "data_offset": 0, 00:08:57.008 "data_size": 0 00:08:57.008 } 00:08:57.008 ] 00:08:57.008 }' 00:08:57.008 09:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.008 09:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 [2024-10-15 09:08:15.234201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.655 [2024-10-15 09:08:15.234338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 [2024-10-15 09:08:15.246239] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.655 [2024-10-15 09:08:15.246371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.655 [2024-10-15 09:08:15.246404] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.655 [2024-10-15 09:08:15.246428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.655 [2024-10-15 09:08:15.246447] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.655 [2024-10-15 09:08:15.246468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 [2024-10-15 09:08:15.300464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.655 BaseBdev1 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 [ 00:08:57.655 { 00:08:57.655 "name": "BaseBdev1", 00:08:57.655 "aliases": [ 00:08:57.655 "a9a9279c-d86d-4929-8fdd-74e4b65b2379" 00:08:57.655 ], 00:08:57.655 "product_name": "Malloc disk", 00:08:57.655 "block_size": 512, 00:08:57.655 "num_blocks": 65536, 00:08:57.655 "uuid": "a9a9279c-d86d-4929-8fdd-74e4b65b2379", 00:08:57.655 "assigned_rate_limits": { 00:08:57.655 "rw_ios_per_sec": 0, 00:08:57.655 "rw_mbytes_per_sec": 0, 00:08:57.655 "r_mbytes_per_sec": 0, 00:08:57.655 "w_mbytes_per_sec": 0 00:08:57.655 }, 
00:08:57.655 "claimed": true, 00:08:57.655 "claim_type": "exclusive_write", 00:08:57.655 "zoned": false, 00:08:57.655 "supported_io_types": { 00:08:57.655 "read": true, 00:08:57.655 "write": true, 00:08:57.655 "unmap": true, 00:08:57.655 "flush": true, 00:08:57.655 "reset": true, 00:08:57.655 "nvme_admin": false, 00:08:57.655 "nvme_io": false, 00:08:57.655 "nvme_io_md": false, 00:08:57.655 "write_zeroes": true, 00:08:57.655 "zcopy": true, 00:08:57.655 "get_zone_info": false, 00:08:57.655 "zone_management": false, 00:08:57.655 "zone_append": false, 00:08:57.655 "compare": false, 00:08:57.655 "compare_and_write": false, 00:08:57.655 "abort": true, 00:08:57.655 "seek_hole": false, 00:08:57.655 "seek_data": false, 00:08:57.655 "copy": true, 00:08:57.655 "nvme_iov_md": false 00:08:57.655 }, 00:08:57.655 "memory_domains": [ 00:08:57.655 { 00:08:57.655 "dma_device_id": "system", 00:08:57.655 "dma_device_type": 1 00:08:57.655 }, 00:08:57.655 { 00:08:57.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.655 "dma_device_type": 2 00:08:57.655 } 00:08:57.655 ], 00:08:57.655 "driver_specific": {} 00:08:57.655 } 00:08:57.655 ] 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.655 09:08:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.655 "name": "Existed_Raid", 00:08:57.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.655 "strip_size_kb": 64, 00:08:57.655 "state": "configuring", 00:08:57.655 "raid_level": "concat", 00:08:57.655 "superblock": false, 00:08:57.655 "num_base_bdevs": 3, 00:08:57.655 "num_base_bdevs_discovered": 1, 00:08:57.655 "num_base_bdevs_operational": 3, 00:08:57.655 "base_bdevs_list": [ 00:08:57.655 { 00:08:57.655 "name": "BaseBdev1", 00:08:57.655 "uuid": "a9a9279c-d86d-4929-8fdd-74e4b65b2379", 00:08:57.655 "is_configured": true, 00:08:57.655 "data_offset": 0, 00:08:57.655 "data_size": 65536 00:08:57.655 }, 00:08:57.655 { 00:08:57.655 "name": "BaseBdev2", 00:08:57.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.655 "is_configured": false, 00:08:57.655 
"data_offset": 0, 00:08:57.655 "data_size": 0 00:08:57.655 }, 00:08:57.655 { 00:08:57.655 "name": "BaseBdev3", 00:08:57.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.655 "is_configured": false, 00:08:57.655 "data_offset": 0, 00:08:57.655 "data_size": 0 00:08:57.655 } 00:08:57.655 ] 00:08:57.655 }' 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.655 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.915 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.915 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.915 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 [2024-10-15 09:08:15.811663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.174 [2024-10-15 09:08:15.811846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 [2024-10-15 09:08:15.823774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.174 [2024-10-15 09:08:15.826046] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.174 [2024-10-15 09:08:15.826189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:58.174 [2024-10-15 09:08:15.826238] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.174 [2024-10-15 09:08:15.826266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.174 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.174 "name": "Existed_Raid", 00:08:58.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.174 "strip_size_kb": 64, 00:08:58.174 "state": "configuring", 00:08:58.174 "raid_level": "concat", 00:08:58.174 "superblock": false, 00:08:58.174 "num_base_bdevs": 3, 00:08:58.174 "num_base_bdevs_discovered": 1, 00:08:58.174 "num_base_bdevs_operational": 3, 00:08:58.174 "base_bdevs_list": [ 00:08:58.174 { 00:08:58.174 "name": "BaseBdev1", 00:08:58.174 "uuid": "a9a9279c-d86d-4929-8fdd-74e4b65b2379", 00:08:58.174 "is_configured": true, 00:08:58.174 "data_offset": 0, 00:08:58.174 "data_size": 65536 00:08:58.174 }, 00:08:58.174 { 00:08:58.174 "name": "BaseBdev2", 00:08:58.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.174 "is_configured": false, 00:08:58.174 "data_offset": 0, 00:08:58.174 "data_size": 0 00:08:58.174 }, 00:08:58.174 { 00:08:58.174 "name": "BaseBdev3", 00:08:58.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.174 "is_configured": false, 00:08:58.174 "data_offset": 0, 00:08:58.174 "data_size": 0 00:08:58.175 } 00:08:58.175 ] 00:08:58.175 }' 00:08:58.175 09:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.175 09:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.434 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:58.434 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.693 [2024-10-15 09:08:16.349805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.693 BaseBdev2 00:08:58.693 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.694 [ 00:08:58.694 { 00:08:58.694 "name": "BaseBdev2", 00:08:58.694 "aliases": [ 00:08:58.694 "5d3d0d0c-d5dd-4dc3-9f05-10bd982df278" 00:08:58.694 ], 
00:08:58.694 "product_name": "Malloc disk", 00:08:58.694 "block_size": 512, 00:08:58.694 "num_blocks": 65536, 00:08:58.694 "uuid": "5d3d0d0c-d5dd-4dc3-9f05-10bd982df278", 00:08:58.694 "assigned_rate_limits": { 00:08:58.694 "rw_ios_per_sec": 0, 00:08:58.694 "rw_mbytes_per_sec": 0, 00:08:58.694 "r_mbytes_per_sec": 0, 00:08:58.694 "w_mbytes_per_sec": 0 00:08:58.694 }, 00:08:58.694 "claimed": true, 00:08:58.694 "claim_type": "exclusive_write", 00:08:58.694 "zoned": false, 00:08:58.694 "supported_io_types": { 00:08:58.694 "read": true, 00:08:58.694 "write": true, 00:08:58.694 "unmap": true, 00:08:58.694 "flush": true, 00:08:58.694 "reset": true, 00:08:58.694 "nvme_admin": false, 00:08:58.694 "nvme_io": false, 00:08:58.694 "nvme_io_md": false, 00:08:58.694 "write_zeroes": true, 00:08:58.694 "zcopy": true, 00:08:58.694 "get_zone_info": false, 00:08:58.694 "zone_management": false, 00:08:58.694 "zone_append": false, 00:08:58.694 "compare": false, 00:08:58.694 "compare_and_write": false, 00:08:58.694 "abort": true, 00:08:58.694 "seek_hole": false, 00:08:58.694 "seek_data": false, 00:08:58.694 "copy": true, 00:08:58.694 "nvme_iov_md": false 00:08:58.694 }, 00:08:58.694 "memory_domains": [ 00:08:58.694 { 00:08:58.694 "dma_device_id": "system", 00:08:58.694 "dma_device_type": 1 00:08:58.694 }, 00:08:58.694 { 00:08:58.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.694 "dma_device_type": 2 00:08:58.694 } 00:08:58.694 ], 00:08:58.694 "driver_specific": {} 00:08:58.694 } 00:08:58.694 ] 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.694 "name": "Existed_Raid", 00:08:58.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.694 "strip_size_kb": 64, 00:08:58.694 "state": "configuring", 00:08:58.694 "raid_level": "concat", 00:08:58.694 
"superblock": false, 00:08:58.694 "num_base_bdevs": 3, 00:08:58.694 "num_base_bdevs_discovered": 2, 00:08:58.694 "num_base_bdevs_operational": 3, 00:08:58.694 "base_bdevs_list": [ 00:08:58.694 { 00:08:58.694 "name": "BaseBdev1", 00:08:58.694 "uuid": "a9a9279c-d86d-4929-8fdd-74e4b65b2379", 00:08:58.694 "is_configured": true, 00:08:58.694 "data_offset": 0, 00:08:58.694 "data_size": 65536 00:08:58.694 }, 00:08:58.694 { 00:08:58.694 "name": "BaseBdev2", 00:08:58.694 "uuid": "5d3d0d0c-d5dd-4dc3-9f05-10bd982df278", 00:08:58.694 "is_configured": true, 00:08:58.694 "data_offset": 0, 00:08:58.694 "data_size": 65536 00:08:58.694 }, 00:08:58.694 { 00:08:58.694 "name": "BaseBdev3", 00:08:58.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.694 "is_configured": false, 00:08:58.694 "data_offset": 0, 00:08:58.694 "data_size": 0 00:08:58.694 } 00:08:58.694 ] 00:08:58.694 }' 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.694 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.264 [2024-10-15 09:08:16.937648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.264 [2024-10-15 09:08:16.937733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.264 [2024-10-15 09:08:16.937749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:59.264 [2024-10-15 09:08:16.938066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:59.264 [2024-10-15 09:08:16.938279] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.264 [2024-10-15 09:08:16.938288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:59.264 [2024-10-15 09:08:16.938597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.264 BaseBdev3 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.264 [ 00:08:59.264 { 00:08:59.264 
"name": "BaseBdev3", 00:08:59.264 "aliases": [ 00:08:59.264 "c0e94f5b-b97a-4c52-af76-ce5074d74b92" 00:08:59.264 ], 00:08:59.264 "product_name": "Malloc disk", 00:08:59.264 "block_size": 512, 00:08:59.264 "num_blocks": 65536, 00:08:59.264 "uuid": "c0e94f5b-b97a-4c52-af76-ce5074d74b92", 00:08:59.264 "assigned_rate_limits": { 00:08:59.264 "rw_ios_per_sec": 0, 00:08:59.264 "rw_mbytes_per_sec": 0, 00:08:59.264 "r_mbytes_per_sec": 0, 00:08:59.264 "w_mbytes_per_sec": 0 00:08:59.264 }, 00:08:59.264 "claimed": true, 00:08:59.264 "claim_type": "exclusive_write", 00:08:59.264 "zoned": false, 00:08:59.264 "supported_io_types": { 00:08:59.264 "read": true, 00:08:59.264 "write": true, 00:08:59.264 "unmap": true, 00:08:59.264 "flush": true, 00:08:59.264 "reset": true, 00:08:59.264 "nvme_admin": false, 00:08:59.264 "nvme_io": false, 00:08:59.264 "nvme_io_md": false, 00:08:59.264 "write_zeroes": true, 00:08:59.264 "zcopy": true, 00:08:59.264 "get_zone_info": false, 00:08:59.264 "zone_management": false, 00:08:59.264 "zone_append": false, 00:08:59.264 "compare": false, 00:08:59.264 "compare_and_write": false, 00:08:59.264 "abort": true, 00:08:59.264 "seek_hole": false, 00:08:59.264 "seek_data": false, 00:08:59.264 "copy": true, 00:08:59.264 "nvme_iov_md": false 00:08:59.264 }, 00:08:59.264 "memory_domains": [ 00:08:59.264 { 00:08:59.264 "dma_device_id": "system", 00:08:59.264 "dma_device_type": 1 00:08:59.264 }, 00:08:59.264 { 00:08:59.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.264 "dma_device_type": 2 00:08:59.264 } 00:08:59.264 ], 00:08:59.264 "driver_specific": {} 00:08:59.264 } 00:08:59.264 ] 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.264 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.265 09:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.265 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.265 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.265 "name": "Existed_Raid", 00:08:59.265 "uuid": "27b678d8-6da3-450c-b404-f72747612eca", 00:08:59.265 
"strip_size_kb": 64, 00:08:59.265 "state": "online", 00:08:59.265 "raid_level": "concat", 00:08:59.265 "superblock": false, 00:08:59.265 "num_base_bdevs": 3, 00:08:59.265 "num_base_bdevs_discovered": 3, 00:08:59.265 "num_base_bdevs_operational": 3, 00:08:59.265 "base_bdevs_list": [ 00:08:59.265 { 00:08:59.265 "name": "BaseBdev1", 00:08:59.265 "uuid": "a9a9279c-d86d-4929-8fdd-74e4b65b2379", 00:08:59.265 "is_configured": true, 00:08:59.265 "data_offset": 0, 00:08:59.265 "data_size": 65536 00:08:59.265 }, 00:08:59.265 { 00:08:59.265 "name": "BaseBdev2", 00:08:59.265 "uuid": "5d3d0d0c-d5dd-4dc3-9f05-10bd982df278", 00:08:59.265 "is_configured": true, 00:08:59.265 "data_offset": 0, 00:08:59.265 "data_size": 65536 00:08:59.265 }, 00:08:59.265 { 00:08:59.265 "name": "BaseBdev3", 00:08:59.265 "uuid": "c0e94f5b-b97a-4c52-af76-ce5074d74b92", 00:08:59.265 "is_configured": true, 00:08:59.265 "data_offset": 0, 00:08:59.265 "data_size": 65536 00:08:59.265 } 00:08:59.265 ] 00:08:59.265 }' 00:08:59.265 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.265 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.525 09:08:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.525 [2024-10-15 09:08:17.389554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.525 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.525 "name": "Existed_Raid", 00:08:59.525 "aliases": [ 00:08:59.525 "27b678d8-6da3-450c-b404-f72747612eca" 00:08:59.525 ], 00:08:59.525 "product_name": "Raid Volume", 00:08:59.525 "block_size": 512, 00:08:59.525 "num_blocks": 196608, 00:08:59.525 "uuid": "27b678d8-6da3-450c-b404-f72747612eca", 00:08:59.525 "assigned_rate_limits": { 00:08:59.525 "rw_ios_per_sec": 0, 00:08:59.525 "rw_mbytes_per_sec": 0, 00:08:59.525 "r_mbytes_per_sec": 0, 00:08:59.525 "w_mbytes_per_sec": 0 00:08:59.525 }, 00:08:59.525 "claimed": false, 00:08:59.525 "zoned": false, 00:08:59.525 "supported_io_types": { 00:08:59.525 "read": true, 00:08:59.525 "write": true, 00:08:59.525 "unmap": true, 00:08:59.525 "flush": true, 00:08:59.525 "reset": true, 00:08:59.525 "nvme_admin": false, 00:08:59.525 "nvme_io": false, 00:08:59.525 "nvme_io_md": false, 00:08:59.525 "write_zeroes": true, 00:08:59.525 "zcopy": false, 00:08:59.525 "get_zone_info": false, 00:08:59.525 "zone_management": false, 00:08:59.525 "zone_append": false, 00:08:59.525 "compare": false, 00:08:59.525 "compare_and_write": false, 00:08:59.525 "abort": false, 00:08:59.525 "seek_hole": false, 00:08:59.525 "seek_data": false, 00:08:59.525 "copy": false, 00:08:59.525 "nvme_iov_md": false 00:08:59.525 }, 00:08:59.525 "memory_domains": [ 00:08:59.525 { 00:08:59.525 "dma_device_id": "system", 
00:08:59.525 "dma_device_type": 1 00:08:59.525 }, 00:08:59.525 { 00:08:59.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.525 "dma_device_type": 2 00:08:59.525 }, 00:08:59.525 { 00:08:59.525 "dma_device_id": "system", 00:08:59.525 "dma_device_type": 1 00:08:59.525 }, 00:08:59.525 { 00:08:59.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.525 "dma_device_type": 2 00:08:59.525 }, 00:08:59.525 { 00:08:59.525 "dma_device_id": "system", 00:08:59.525 "dma_device_type": 1 00:08:59.525 }, 00:08:59.525 { 00:08:59.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.525 "dma_device_type": 2 00:08:59.525 } 00:08:59.525 ], 00:08:59.525 "driver_specific": { 00:08:59.525 "raid": { 00:08:59.525 "uuid": "27b678d8-6da3-450c-b404-f72747612eca", 00:08:59.525 "strip_size_kb": 64, 00:08:59.525 "state": "online", 00:08:59.525 "raid_level": "concat", 00:08:59.525 "superblock": false, 00:08:59.525 "num_base_bdevs": 3, 00:08:59.525 "num_base_bdevs_discovered": 3, 00:08:59.525 "num_base_bdevs_operational": 3, 00:08:59.525 "base_bdevs_list": [ 00:08:59.525 { 00:08:59.525 "name": "BaseBdev1", 00:08:59.525 "uuid": "a9a9279c-d86d-4929-8fdd-74e4b65b2379", 00:08:59.526 "is_configured": true, 00:08:59.526 "data_offset": 0, 00:08:59.526 "data_size": 65536 00:08:59.526 }, 00:08:59.526 { 00:08:59.526 "name": "BaseBdev2", 00:08:59.526 "uuid": "5d3d0d0c-d5dd-4dc3-9f05-10bd982df278", 00:08:59.526 "is_configured": true, 00:08:59.526 "data_offset": 0, 00:08:59.526 "data_size": 65536 00:08:59.526 }, 00:08:59.526 { 00:08:59.526 "name": "BaseBdev3", 00:08:59.526 "uuid": "c0e94f5b-b97a-4c52-af76-ce5074d74b92", 00:08:59.526 "is_configured": true, 00:08:59.526 "data_offset": 0, 00:08:59.526 "data_size": 65536 00:08:59.526 } 00:08:59.526 ] 00:08:59.526 } 00:08:59.526 } 00:08:59.526 }' 00:08:59.526 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.785 09:08:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:59.785 BaseBdev2 00:08:59.785 BaseBdev3' 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.785 09:08:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.785 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.786 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.786 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.786 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.786 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.786 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.786 [2024-10-15 09:08:17.657251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.786 [2024-10-15 09:08:17.657378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.786 [2024-10-15 09:08:17.657458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- 
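The `@189`–`@193` steps traced above join each bdev's `block_size`, `md_size`, `md_interleave` and `dif_type` into one space-separated string and compare the raid volume against every base bdev verbatim — which is why the xtrace shows escaped trailing spaces in `[[ 512 == \5\1\2\ \ \ ]]`. A minimal sketch of that comparison; `join_props` is a hypothetical stand-in for the `jq '[...] | join(" ")'` filter, not a function from the real script:

```shell
# join_props is a hypothetical stand-in for the jq filter
# '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.
# Empty md_size/md_interleave/dif_type fields become trailing spaces,
# which is why the trace shows cmp strings like '512   '.
join_props() {
    echo "$1 $2 $3 $4"
}

cmp_raid_bdev=$(join_props 512 "" "" "")   # raid volume properties
cmp_base_bdev=$(join_props 512 "" "" "")   # one base bdev's properties
[[ $cmp_raid_bdev == "$cmp_base_bdev" ]] && echo "properties match"
```

Comparing the joined strings rather than each field individually makes the check a single string equality, at the cost of the odd-looking escaped-space pattern in the trace.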
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.045 09:08:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.045 "name": "Existed_Raid", 00:09:00.045 "uuid": "27b678d8-6da3-450c-b404-f72747612eca", 00:09:00.045 "strip_size_kb": 64, 00:09:00.045 "state": "offline", 00:09:00.045 "raid_level": "concat", 00:09:00.045 "superblock": false, 00:09:00.045 "num_base_bdevs": 3, 00:09:00.045 "num_base_bdevs_discovered": 2, 00:09:00.045 "num_base_bdevs_operational": 2, 00:09:00.045 "base_bdevs_list": [ 00:09:00.045 { 00:09:00.045 "name": null, 00:09:00.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.045 "is_configured": false, 00:09:00.045 "data_offset": 0, 00:09:00.045 "data_size": 65536 00:09:00.045 }, 00:09:00.045 { 00:09:00.045 "name": "BaseBdev2", 00:09:00.045 "uuid": "5d3d0d0c-d5dd-4dc3-9f05-10bd982df278", 00:09:00.045 "is_configured": true, 00:09:00.045 "data_offset": 0, 00:09:00.045 "data_size": 65536 00:09:00.045 }, 00:09:00.045 { 00:09:00.045 "name": "BaseBdev3", 00:09:00.045 "uuid": "c0e94f5b-b97a-4c52-af76-ce5074d74b92", 00:09:00.045 "is_configured": true, 00:09:00.045 "data_offset": 0, 00:09:00.045 "data_size": 65536 00:09:00.045 } 00:09:00.045 ] 00:09:00.045 }' 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.045 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq 
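After `bdev_malloc_delete BaseBdev1`, the test derives the expected raid state from the raid level: in the trace, `has_redundancy concat` falls through the `case` at `bdev_raid.sh@198`, returns 1 at `@200`, and `expected_state` becomes `offline`, which the JSON dump above then confirms (`"state": "offline"`, two of three base bdevs left). A minimal sketch of that decision, reconstructed from the xtrace — the levels listed in the first `case` arm are an assumption about which levels carry redundancy, not the verbatim script source:

```shell
# Reconstructed from the xtrace: concat (like raid0) has no redundancy,
# so losing a base bdev is expected to take the array offline.
# The raid1/raid5f arm is an assumption, not copied from bdev_raid.sh.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"   # offline
```

The same helper is what lets one `raid_state_function_test` body cover both redundant and non-redundant levels: only the expected post-failure state changes.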
-r '.[0]["name"]' 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.613 [2024-10-15 09:08:18.265256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.613 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.613 [2024-10-15 09:08:18.428955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.613 [2024-10-15 09:08:18.429055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 
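The `(( i = 1 ))` / `(( i < num_base_bdevs ))` counters at `bdev_raid.sh@270` (and again at `@286` below) drive both the removal checks and the re-creation that follows: the index starts at 1 because BaseBdev1 was handled separately, so the loop visits the remaining base bdevs in order. A sketch of that iteration pattern; the `BaseBdev$((i + 1))` naming is inferred from the order the bdevs appear in this trace (BaseBdev2, then BaseBdev3), not taken from the script source:

```shell
# Iteration pattern inferred from the bdev_raid.sh@270/@286 xtrace:
# i starts at 1 (BaseBdev1 was already deleted separately), and each
# pass operates on the next base bdev until i reaches num_base_bdevs.
num_base_bdevs=3
visited=()
(( i = 1 ))
while (( i < num_base_bdevs )); do
    visited+=("BaseBdev$((i + 1))")   # BaseBdev2, then BaseBdev3
    (( i++ ))
done
echo "${visited[@]}"   # BaseBdev2 BaseBdev3
```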
-gt 2 ']' 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 BaseBdev2 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.872 09:08:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 [ 00:09:00.872 { 00:09:00.872 "name": "BaseBdev2", 00:09:00.872 "aliases": [ 00:09:00.872 "5339c242-9761-4ba5-976c-ce78e1cc2b90" 00:09:00.872 ], 00:09:00.872 "product_name": "Malloc disk", 00:09:00.872 "block_size": 512, 00:09:00.872 "num_blocks": 65536, 00:09:00.872 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:00.872 "assigned_rate_limits": { 00:09:00.872 "rw_ios_per_sec": 0, 00:09:00.872 "rw_mbytes_per_sec": 0, 00:09:00.872 "r_mbytes_per_sec": 0, 00:09:00.872 "w_mbytes_per_sec": 0 00:09:00.872 }, 00:09:00.872 "claimed": false, 00:09:00.872 "zoned": false, 00:09:00.872 "supported_io_types": { 00:09:00.872 "read": true, 00:09:00.872 "write": true, 00:09:00.872 "unmap": true, 00:09:00.872 "flush": true, 00:09:00.872 "reset": true, 00:09:00.872 "nvme_admin": false, 00:09:00.872 "nvme_io": false, 00:09:00.872 "nvme_io_md": false, 00:09:00.872 "write_zeroes": true, 00:09:00.872 "zcopy": true, 00:09:00.872 "get_zone_info": false, 00:09:00.872 "zone_management": false, 00:09:00.872 "zone_append": false, 00:09:00.872 "compare": false, 00:09:00.872 "compare_and_write": false, 00:09:00.872 "abort": true, 00:09:00.872 "seek_hole": false, 00:09:00.872 "seek_data": false, 00:09:00.872 "copy": true, 00:09:00.872 "nvme_iov_md": false 00:09:00.872 }, 00:09:00.872 "memory_domains": [ 00:09:00.872 { 00:09:00.872 "dma_device_id": "system", 00:09:00.872 "dma_device_type": 1 00:09:00.872 }, 00:09:00.872 { 00:09:00.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.872 "dma_device_type": 2 00:09:00.872 } 00:09:00.872 ], 00:09:00.872 "driver_specific": {} 00:09:00.872 } 00:09:00.872 ] 00:09:00.872 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.873 BaseBdev3 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.873 
09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.873 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.873 [ 00:09:00.873 { 00:09:00.873 "name": "BaseBdev3", 00:09:00.873 "aliases": [ 00:09:00.873 "6c591a0a-ad51-4118-9875-24f83b2c9f2a" 00:09:00.873 ], 00:09:00.873 "product_name": "Malloc disk", 00:09:00.873 "block_size": 512, 00:09:00.873 "num_blocks": 65536, 00:09:00.873 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:00.873 "assigned_rate_limits": { 00:09:00.873 "rw_ios_per_sec": 0, 00:09:00.873 "rw_mbytes_per_sec": 0, 00:09:00.873 "r_mbytes_per_sec": 0, 00:09:00.873 "w_mbytes_per_sec": 0 00:09:00.873 }, 00:09:00.873 "claimed": false, 00:09:00.873 "zoned": false, 00:09:00.873 "supported_io_types": { 00:09:00.873 "read": true, 00:09:00.873 "write": true, 00:09:00.873 "unmap": true, 00:09:00.873 "flush": true, 00:09:00.873 "reset": true, 00:09:00.873 "nvme_admin": false, 00:09:00.873 "nvme_io": false, 00:09:00.873 "nvme_io_md": false, 00:09:00.873 "write_zeroes": true, 00:09:00.873 "zcopy": true, 00:09:00.873 "get_zone_info": false, 00:09:00.873 "zone_management": false, 00:09:00.873 "zone_append": false, 00:09:00.873 "compare": false, 00:09:00.873 "compare_and_write": false, 00:09:00.873 "abort": true, 00:09:00.873 "seek_hole": false, 00:09:00.873 "seek_data": false, 00:09:00.873 "copy": true, 00:09:00.873 "nvme_iov_md": false 00:09:00.873 }, 00:09:00.873 "memory_domains": [ 00:09:00.873 { 00:09:01.132 "dma_device_id": "system", 00:09:01.132 "dma_device_type": 1 00:09:01.132 }, 00:09:01.132 { 00:09:01.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.132 "dma_device_type": 2 00:09:01.132 } 00:09:01.132 ], 00:09:01.132 "driver_specific": {} 00:09:01.132 } 00:09:01.132 ] 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- 
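The `waitforbdev` calls traced above (`@899`–`@907` in `autotest_common.sh`) show a default-timeout pattern: with no timeout argument, `bdev_timeout` is empty, the `[[ -z '' ]]` test fires at `@902`, and the helper falls back to 2000 ms before `bdev_get_bdevs -b <name> -t 2000` polls for the bdev. A sketch of just that defaulting step — the function name here is mine, and the real helper additionally runs `bdev_wait_for_examine` and the `bdev_get_bdevs` query:

```shell
# Default-timeout logic visible in the waitforbdev xtrace: an empty
# second argument falls back to 2000 (ms), matching the
# 'bdev_get_bdevs -b BaseBdev2 -t 2000' call in the log above.
resolve_bdev_timeout() {
    local bdev_name=$1
    local bdev_timeout=$2
    [[ -z $bdev_timeout ]] && bdev_timeout=2000
    echo "$bdev_timeout"
}

resolve_bdev_timeout BaseBdev2        # → 2000
resolve_bdev_timeout BaseBdev2 500    # → 500
```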
common/autotest_common.sh@907 -- # return 0 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.132 [2024-10-15 09:08:18.780135] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.132 [2024-10-15 09:08:18.780305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.132 [2024-10-15 09:08:18.780363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.132 [2024-10-15 09:08:18.782631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.132 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.133 "name": "Existed_Raid", 00:09:01.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.133 "strip_size_kb": 64, 00:09:01.133 "state": "configuring", 00:09:01.133 "raid_level": "concat", 00:09:01.133 "superblock": false, 00:09:01.133 "num_base_bdevs": 3, 00:09:01.133 "num_base_bdevs_discovered": 2, 00:09:01.133 "num_base_bdevs_operational": 3, 00:09:01.133 "base_bdevs_list": [ 00:09:01.133 { 00:09:01.133 "name": "BaseBdev1", 00:09:01.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.133 "is_configured": false, 00:09:01.133 "data_offset": 0, 00:09:01.133 "data_size": 0 00:09:01.133 }, 00:09:01.133 { 00:09:01.133 "name": "BaseBdev2", 00:09:01.133 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:01.133 "is_configured": true, 00:09:01.133 "data_offset": 0, 00:09:01.133 "data_size": 65536 00:09:01.133 }, 00:09:01.133 { 00:09:01.133 "name": 
"BaseBdev3", 00:09:01.133 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:01.133 "is_configured": true, 00:09:01.133 "data_offset": 0, 00:09:01.133 "data_size": 65536 00:09:01.133 } 00:09:01.133 ] 00:09:01.133 }' 00:09:01.133 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.133 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.392 [2024-10-15 09:08:19.203348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.392 "name": "Existed_Raid", 00:09:01.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.392 "strip_size_kb": 64, 00:09:01.392 "state": "configuring", 00:09:01.392 "raid_level": "concat", 00:09:01.392 "superblock": false, 00:09:01.392 "num_base_bdevs": 3, 00:09:01.392 "num_base_bdevs_discovered": 1, 00:09:01.392 "num_base_bdevs_operational": 3, 00:09:01.392 "base_bdevs_list": [ 00:09:01.392 { 00:09:01.392 "name": "BaseBdev1", 00:09:01.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.392 "is_configured": false, 00:09:01.392 "data_offset": 0, 00:09:01.392 "data_size": 0 00:09:01.392 }, 00:09:01.392 { 00:09:01.392 "name": null, 00:09:01.392 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:01.392 "is_configured": false, 00:09:01.392 "data_offset": 0, 00:09:01.392 "data_size": 65536 00:09:01.392 }, 00:09:01.392 { 00:09:01.392 "name": "BaseBdev3", 00:09:01.392 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:01.392 "is_configured": true, 00:09:01.392 "data_offset": 0, 00:09:01.392 "data_size": 65536 00:09:01.392 } 00:09:01.392 ] 00:09:01.392 }' 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.392 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.961 [2024-10-15 09:08:19.790863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.961 BaseBdev1 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.961 
09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.961 [ 00:09:01.961 { 00:09:01.961 "name": "BaseBdev1", 00:09:01.961 "aliases": [ 00:09:01.961 "19688d53-3214-4e39-a858-713e5ba4de58" 00:09:01.961 ], 00:09:01.961 "product_name": "Malloc disk", 00:09:01.961 "block_size": 512, 00:09:01.961 "num_blocks": 65536, 00:09:01.961 "uuid": "19688d53-3214-4e39-a858-713e5ba4de58", 00:09:01.961 "assigned_rate_limits": { 00:09:01.961 "rw_ios_per_sec": 0, 00:09:01.961 "rw_mbytes_per_sec": 0, 00:09:01.961 "r_mbytes_per_sec": 0, 00:09:01.961 "w_mbytes_per_sec": 0 00:09:01.961 }, 00:09:01.961 "claimed": true, 00:09:01.961 "claim_type": "exclusive_write", 00:09:01.961 "zoned": false, 00:09:01.961 "supported_io_types": { 00:09:01.961 "read": true, 00:09:01.961 "write": true, 00:09:01.961 "unmap": true, 00:09:01.961 "flush": true, 00:09:01.961 "reset": true, 00:09:01.961 "nvme_admin": false, 00:09:01.961 "nvme_io": false, 00:09:01.961 "nvme_io_md": false, 00:09:01.961 "write_zeroes": true, 00:09:01.961 "zcopy": true, 00:09:01.961 "get_zone_info": false, 00:09:01.961 "zone_management": false, 00:09:01.961 "zone_append": false, 00:09:01.961 "compare": 
false, 00:09:01.961 "compare_and_write": false, 00:09:01.961 "abort": true, 00:09:01.961 "seek_hole": false, 00:09:01.961 "seek_data": false, 00:09:01.961 "copy": true, 00:09:01.961 "nvme_iov_md": false 00:09:01.961 }, 00:09:01.961 "memory_domains": [ 00:09:01.961 { 00:09:01.961 "dma_device_id": "system", 00:09:01.961 "dma_device_type": 1 00:09:01.961 }, 00:09:01.961 { 00:09:01.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.961 "dma_device_type": 2 00:09:01.961 } 00:09:01.961 ], 00:09:01.961 "driver_specific": {} 00:09:01.961 } 00:09:01.961 ] 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.961 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.220 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.220 "name": "Existed_Raid", 00:09:02.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.220 "strip_size_kb": 64, 00:09:02.220 "state": "configuring", 00:09:02.220 "raid_level": "concat", 00:09:02.220 "superblock": false, 00:09:02.220 "num_base_bdevs": 3, 00:09:02.220 "num_base_bdevs_discovered": 2, 00:09:02.220 "num_base_bdevs_operational": 3, 00:09:02.220 "base_bdevs_list": [ 00:09:02.220 { 00:09:02.220 "name": "BaseBdev1", 00:09:02.220 "uuid": "19688d53-3214-4e39-a858-713e5ba4de58", 00:09:02.220 "is_configured": true, 00:09:02.220 "data_offset": 0, 00:09:02.220 "data_size": 65536 00:09:02.220 }, 00:09:02.220 { 00:09:02.220 "name": null, 00:09:02.220 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:02.220 "is_configured": false, 00:09:02.220 "data_offset": 0, 00:09:02.220 "data_size": 65536 00:09:02.220 }, 00:09:02.220 { 00:09:02.220 "name": "BaseBdev3", 00:09:02.220 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:02.220 "is_configured": true, 00:09:02.220 "data_offset": 0, 00:09:02.220 "data_size": 65536 00:09:02.220 } 00:09:02.220 ] 00:09:02.220 }' 00:09:02.220 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.220 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 
-- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.481 [2024-10-15 09:08:20.350091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.481 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.741 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.741 "name": "Existed_Raid", 00:09:02.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.741 "strip_size_kb": 64, 00:09:02.741 "state": "configuring", 00:09:02.741 "raid_level": "concat", 00:09:02.741 "superblock": false, 00:09:02.741 "num_base_bdevs": 3, 00:09:02.741 "num_base_bdevs_discovered": 1, 00:09:02.741 "num_base_bdevs_operational": 3, 00:09:02.741 "base_bdevs_list": [ 00:09:02.741 { 00:09:02.741 "name": "BaseBdev1", 00:09:02.741 "uuid": "19688d53-3214-4e39-a858-713e5ba4de58", 00:09:02.741 "is_configured": true, 00:09:02.741 "data_offset": 0, 00:09:02.741 "data_size": 65536 00:09:02.741 }, 00:09:02.741 { 00:09:02.741 "name": null, 00:09:02.741 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:02.741 "is_configured": false, 00:09:02.741 "data_offset": 0, 00:09:02.741 "data_size": 65536 00:09:02.741 }, 00:09:02.741 { 00:09:02.741 "name": null, 00:09:02.741 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:02.741 "is_configured": false, 00:09:02.741 
"data_offset": 0, 00:09:02.741 "data_size": 65536 00:09:02.741 } 00:09:02.741 ] 00:09:02.741 }' 00:09:02.741 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.741 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.000 [2024-10-15 09:08:20.869256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.000 09:08:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.000 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.260 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.260 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.260 "name": "Existed_Raid", 00:09:03.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.260 "strip_size_kb": 64, 00:09:03.260 "state": "configuring", 00:09:03.260 "raid_level": "concat", 00:09:03.260 "superblock": false, 00:09:03.260 "num_base_bdevs": 3, 00:09:03.260 "num_base_bdevs_discovered": 2, 00:09:03.260 "num_base_bdevs_operational": 3, 00:09:03.260 "base_bdevs_list": [ 00:09:03.260 { 00:09:03.260 "name": "BaseBdev1", 00:09:03.260 "uuid": "19688d53-3214-4e39-a858-713e5ba4de58", 00:09:03.260 "is_configured": true, 00:09:03.260 "data_offset": 
0, 00:09:03.260 "data_size": 65536 00:09:03.260 }, 00:09:03.260 { 00:09:03.260 "name": null, 00:09:03.260 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:03.260 "is_configured": false, 00:09:03.260 "data_offset": 0, 00:09:03.260 "data_size": 65536 00:09:03.260 }, 00:09:03.260 { 00:09:03.260 "name": "BaseBdev3", 00:09:03.260 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:03.260 "is_configured": true, 00:09:03.260 "data_offset": 0, 00:09:03.260 "data_size": 65536 00:09:03.260 } 00:09:03.260 ] 00:09:03.260 }' 00:09:03.260 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.260 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.520 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.520 [2024-10-15 09:08:21.396562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.780 09:08:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.780 "name": "Existed_Raid", 00:09:03.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.780 "strip_size_kb": 64, 00:09:03.780 "state": "configuring", 00:09:03.780 
"raid_level": "concat", 00:09:03.780 "superblock": false, 00:09:03.780 "num_base_bdevs": 3, 00:09:03.780 "num_base_bdevs_discovered": 1, 00:09:03.780 "num_base_bdevs_operational": 3, 00:09:03.780 "base_bdevs_list": [ 00:09:03.780 { 00:09:03.780 "name": null, 00:09:03.780 "uuid": "19688d53-3214-4e39-a858-713e5ba4de58", 00:09:03.780 "is_configured": false, 00:09:03.780 "data_offset": 0, 00:09:03.780 "data_size": 65536 00:09:03.780 }, 00:09:03.780 { 00:09:03.780 "name": null, 00:09:03.780 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:03.780 "is_configured": false, 00:09:03.780 "data_offset": 0, 00:09:03.780 "data_size": 65536 00:09:03.780 }, 00:09:03.780 { 00:09:03.780 "name": "BaseBdev3", 00:09:03.780 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:03.780 "is_configured": true, 00:09:03.780 "data_offset": 0, 00:09:03.780 "data_size": 65536 00:09:03.780 } 00:09:03.780 ] 00:09:03.780 }' 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.780 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.349 [2024-10-15 09:08:21.984111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.349 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.350 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:04.350 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.350 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.350 "name": "Existed_Raid", 00:09:04.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.350 "strip_size_kb": 64, 00:09:04.350 "state": "configuring", 00:09:04.350 "raid_level": "concat", 00:09:04.350 "superblock": false, 00:09:04.350 "num_base_bdevs": 3, 00:09:04.350 "num_base_bdevs_discovered": 2, 00:09:04.350 "num_base_bdevs_operational": 3, 00:09:04.350 "base_bdevs_list": [ 00:09:04.350 { 00:09:04.350 "name": null, 00:09:04.350 "uuid": "19688d53-3214-4e39-a858-713e5ba4de58", 00:09:04.350 "is_configured": false, 00:09:04.350 "data_offset": 0, 00:09:04.350 "data_size": 65536 00:09:04.350 }, 00:09:04.350 { 00:09:04.350 "name": "BaseBdev2", 00:09:04.350 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:04.350 "is_configured": true, 00:09:04.350 "data_offset": 0, 00:09:04.350 "data_size": 65536 00:09:04.350 }, 00:09:04.350 { 00:09:04.350 "name": "BaseBdev3", 00:09:04.350 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:04.350 "is_configured": true, 00:09:04.350 "data_offset": 0, 00:09:04.350 "data_size": 65536 00:09:04.350 } 00:09:04.350 ] 00:09:04.350 }' 00:09:04.350 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.350 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.610 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.870 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.870 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 19688d53-3214-4e39-a858-713e5ba4de58 00:09:04.870 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.870 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.871 [2024-10-15 09:08:22.582551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:04.871 [2024-10-15 09:08:22.582757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:04.871 [2024-10-15 09:08:22.582783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:04.871 [2024-10-15 09:08:22.583056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:04.871 [2024-10-15 09:08:22.583222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:04.871 [2024-10-15 09:08:22.583232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:04.871 [2024-10-15 
09:08:22.583511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.871 NewBaseBdev 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.871 [ 00:09:04.871 { 00:09:04.871 "name": "NewBaseBdev", 00:09:04.871 "aliases": [ 00:09:04.871 "19688d53-3214-4e39-a858-713e5ba4de58" 00:09:04.871 ], 00:09:04.871 "product_name": "Malloc disk", 00:09:04.871 "block_size": 512, 00:09:04.871 "num_blocks": 65536, 00:09:04.871 "uuid": 
"19688d53-3214-4e39-a858-713e5ba4de58", 00:09:04.871 "assigned_rate_limits": { 00:09:04.871 "rw_ios_per_sec": 0, 00:09:04.871 "rw_mbytes_per_sec": 0, 00:09:04.871 "r_mbytes_per_sec": 0, 00:09:04.871 "w_mbytes_per_sec": 0 00:09:04.871 }, 00:09:04.871 "claimed": true, 00:09:04.871 "claim_type": "exclusive_write", 00:09:04.871 "zoned": false, 00:09:04.871 "supported_io_types": { 00:09:04.871 "read": true, 00:09:04.871 "write": true, 00:09:04.871 "unmap": true, 00:09:04.871 "flush": true, 00:09:04.871 "reset": true, 00:09:04.871 "nvme_admin": false, 00:09:04.871 "nvme_io": false, 00:09:04.871 "nvme_io_md": false, 00:09:04.871 "write_zeroes": true, 00:09:04.871 "zcopy": true, 00:09:04.871 "get_zone_info": false, 00:09:04.871 "zone_management": false, 00:09:04.871 "zone_append": false, 00:09:04.871 "compare": false, 00:09:04.871 "compare_and_write": false, 00:09:04.871 "abort": true, 00:09:04.871 "seek_hole": false, 00:09:04.871 "seek_data": false, 00:09:04.871 "copy": true, 00:09:04.871 "nvme_iov_md": false 00:09:04.871 }, 00:09:04.871 "memory_domains": [ 00:09:04.871 { 00:09:04.871 "dma_device_id": "system", 00:09:04.871 "dma_device_type": 1 00:09:04.871 }, 00:09:04.871 { 00:09:04.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.871 "dma_device_type": 2 00:09:04.871 } 00:09:04.871 ], 00:09:04.871 "driver_specific": {} 00:09:04.871 } 00:09:04.871 ] 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.871 09:08:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.871 "name": "Existed_Raid", 00:09:04.871 "uuid": "c34ecc20-f405-494e-a88e-42b1b7ce6b53", 00:09:04.871 "strip_size_kb": 64, 00:09:04.871 "state": "online", 00:09:04.871 "raid_level": "concat", 00:09:04.871 "superblock": false, 00:09:04.871 "num_base_bdevs": 3, 00:09:04.871 "num_base_bdevs_discovered": 3, 00:09:04.871 "num_base_bdevs_operational": 3, 00:09:04.871 "base_bdevs_list": [ 00:09:04.871 { 00:09:04.871 "name": "NewBaseBdev", 00:09:04.871 "uuid": "19688d53-3214-4e39-a858-713e5ba4de58", 00:09:04.871 "is_configured": true, 00:09:04.871 "data_offset": 0, 
00:09:04.871 "data_size": 65536 00:09:04.871 }, 00:09:04.871 { 00:09:04.871 "name": "BaseBdev2", 00:09:04.871 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:04.871 "is_configured": true, 00:09:04.871 "data_offset": 0, 00:09:04.871 "data_size": 65536 00:09:04.871 }, 00:09:04.871 { 00:09:04.871 "name": "BaseBdev3", 00:09:04.871 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:04.871 "is_configured": true, 00:09:04.871 "data_offset": 0, 00:09:04.871 "data_size": 65536 00:09:04.871 } 00:09:04.871 ] 00:09:04.871 }' 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.871 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.439 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.440 [2024-10-15 09:08:23.102123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.440 09:08:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.440 "name": "Existed_Raid", 00:09:05.440 "aliases": [ 00:09:05.440 "c34ecc20-f405-494e-a88e-42b1b7ce6b53" 00:09:05.440 ], 00:09:05.440 "product_name": "Raid Volume", 00:09:05.440 "block_size": 512, 00:09:05.440 "num_blocks": 196608, 00:09:05.440 "uuid": "c34ecc20-f405-494e-a88e-42b1b7ce6b53", 00:09:05.440 "assigned_rate_limits": { 00:09:05.440 "rw_ios_per_sec": 0, 00:09:05.440 "rw_mbytes_per_sec": 0, 00:09:05.440 "r_mbytes_per_sec": 0, 00:09:05.440 "w_mbytes_per_sec": 0 00:09:05.440 }, 00:09:05.440 "claimed": false, 00:09:05.440 "zoned": false, 00:09:05.440 "supported_io_types": { 00:09:05.440 "read": true, 00:09:05.440 "write": true, 00:09:05.440 "unmap": true, 00:09:05.440 "flush": true, 00:09:05.440 "reset": true, 00:09:05.440 "nvme_admin": false, 00:09:05.440 "nvme_io": false, 00:09:05.440 "nvme_io_md": false, 00:09:05.440 "write_zeroes": true, 00:09:05.440 "zcopy": false, 00:09:05.440 "get_zone_info": false, 00:09:05.440 "zone_management": false, 00:09:05.440 "zone_append": false, 00:09:05.440 "compare": false, 00:09:05.440 "compare_and_write": false, 00:09:05.440 "abort": false, 00:09:05.440 "seek_hole": false, 00:09:05.440 "seek_data": false, 00:09:05.440 "copy": false, 00:09:05.440 "nvme_iov_md": false 00:09:05.440 }, 00:09:05.440 "memory_domains": [ 00:09:05.440 { 00:09:05.440 "dma_device_id": "system", 00:09:05.440 "dma_device_type": 1 00:09:05.440 }, 00:09:05.440 { 00:09:05.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.440 "dma_device_type": 2 00:09:05.440 }, 00:09:05.440 { 00:09:05.440 "dma_device_id": "system", 00:09:05.440 "dma_device_type": 1 00:09:05.440 }, 00:09:05.440 { 00:09:05.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.440 "dma_device_type": 2 00:09:05.440 }, 00:09:05.440 { 00:09:05.440 "dma_device_id": "system", 00:09:05.440 
"dma_device_type": 1 00:09:05.440 }, 00:09:05.440 { 00:09:05.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.440 "dma_device_type": 2 00:09:05.440 } 00:09:05.440 ], 00:09:05.440 "driver_specific": { 00:09:05.440 "raid": { 00:09:05.440 "uuid": "c34ecc20-f405-494e-a88e-42b1b7ce6b53", 00:09:05.440 "strip_size_kb": 64, 00:09:05.440 "state": "online", 00:09:05.440 "raid_level": "concat", 00:09:05.440 "superblock": false, 00:09:05.440 "num_base_bdevs": 3, 00:09:05.440 "num_base_bdevs_discovered": 3, 00:09:05.440 "num_base_bdevs_operational": 3, 00:09:05.440 "base_bdevs_list": [ 00:09:05.440 { 00:09:05.440 "name": "NewBaseBdev", 00:09:05.440 "uuid": "19688d53-3214-4e39-a858-713e5ba4de58", 00:09:05.440 "is_configured": true, 00:09:05.440 "data_offset": 0, 00:09:05.440 "data_size": 65536 00:09:05.440 }, 00:09:05.440 { 00:09:05.440 "name": "BaseBdev2", 00:09:05.440 "uuid": "5339c242-9761-4ba5-976c-ce78e1cc2b90", 00:09:05.440 "is_configured": true, 00:09:05.440 "data_offset": 0, 00:09:05.440 "data_size": 65536 00:09:05.440 }, 00:09:05.440 { 00:09:05.440 "name": "BaseBdev3", 00:09:05.440 "uuid": "6c591a0a-ad51-4118-9875-24f83b2c9f2a", 00:09:05.440 "is_configured": true, 00:09:05.440 "data_offset": 0, 00:09:05.440 "data_size": 65536 00:09:05.440 } 00:09:05.440 ] 00:09:05.440 } 00:09:05.440 } 00:09:05.440 }' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:05.440 BaseBdev2 00:09:05.440 BaseBdev3' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.440 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.700 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.701 [2024-10-15 09:08:23.421233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.701 [2024-10-15 09:08:23.421268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.701 [2024-10-15 09:08:23.421368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.701 [2024-10-15 09:08:23.421435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.701 [2024-10-15 09:08:23.421450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65681 
00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65681 ']' 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65681 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65681 00:09:05.701 killing process with pid 65681 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65681' 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 65681 00:09:05.701 [2024-10-15 09:08:23.476323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.701 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65681 00:09:05.961 [2024-10-15 09:08:23.810928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.339 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.339 00:09:07.339 real 0m11.204s 00:09:07.339 user 0m17.808s 00:09:07.339 sys 0m1.965s 00:09:07.339 ************************************ 00:09:07.339 END TEST raid_state_function_test 00:09:07.339 ************************************ 00:09:07.339 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.339 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.339 09:08:25 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:07.339 09:08:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:07.339 09:08:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.339 09:08:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.339 ************************************ 00:09:07.339 START TEST raid_state_function_test_sb 00:09:07.339 ************************************ 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.339 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.340 09:08:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66308 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66308' 00:09:07.340 Process raid pid: 
66308 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66308 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66308 ']' 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.340 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.340 [2024-10-15 09:08:25.147728] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:09:07.340 [2024-10-15 09:08:25.147929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.598 [2024-10-15 09:08:25.314830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.598 [2024-10-15 09:08:25.441376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.857 [2024-10-15 09:08:25.657575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.857 [2024-10-15 09:08:25.657757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.115 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.115 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:08.115 09:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.115 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.115 09:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.115 [2024-10-15 09:08:26.004627] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.115 [2024-10-15 09:08:26.004767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.115 [2024-10-15 09:08:26.004805] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.115 [2024-10-15 09:08:26.004834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.115 [2024-10-15 09:08:26.004857] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:08.115 [2024-10-15 09:08:26.004894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.115 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.115 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.115 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.115 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.375 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.375 "name": "Existed_Raid", 00:09:08.375 "uuid": "e1a8405d-6692-43a6-b8ab-dffc6d9fd577", 00:09:08.375 "strip_size_kb": 64, 00:09:08.375 "state": "configuring", 00:09:08.375 "raid_level": "concat", 00:09:08.375 "superblock": true, 00:09:08.375 "num_base_bdevs": 3, 00:09:08.375 "num_base_bdevs_discovered": 0, 00:09:08.375 "num_base_bdevs_operational": 3, 00:09:08.375 "base_bdevs_list": [ 00:09:08.375 { 00:09:08.375 "name": "BaseBdev1", 00:09:08.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.375 "is_configured": false, 00:09:08.376 "data_offset": 0, 00:09:08.376 "data_size": 0 00:09:08.376 }, 00:09:08.376 { 00:09:08.376 "name": "BaseBdev2", 00:09:08.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.376 "is_configured": false, 00:09:08.376 "data_offset": 0, 00:09:08.376 "data_size": 0 00:09:08.376 }, 00:09:08.376 { 00:09:08.376 "name": "BaseBdev3", 00:09:08.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.376 "is_configured": false, 00:09:08.376 "data_offset": 0, 00:09:08.376 "data_size": 0 00:09:08.376 } 00:09:08.376 ] 00:09:08.376 }' 00:09:08.376 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.376 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.635 [2024-10-15 09:08:26.467796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.635 [2024-10-15 09:08:26.467839] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.635 [2024-10-15 09:08:26.479808] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.635 [2024-10-15 09:08:26.479914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.635 [2024-10-15 09:08:26.479952] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.635 [2024-10-15 09:08:26.479996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.635 [2024-10-15 09:08:26.480018] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.635 [2024-10-15 09:08:26.480049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.635 [2024-10-15 09:08:26.528266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.635 BaseBdev1 
00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:08.635 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.894 [ 00:09:08.894 { 00:09:08.894 "name": "BaseBdev1", 00:09:08.894 "aliases": [ 00:09:08.894 "d1bf8185-7c93-4951-a04f-57c3d3a1e8d8" 00:09:08.894 ], 00:09:08.894 "product_name": "Malloc disk", 00:09:08.894 "block_size": 512, 00:09:08.894 "num_blocks": 65536, 00:09:08.894 "uuid": "d1bf8185-7c93-4951-a04f-57c3d3a1e8d8", 00:09:08.894 "assigned_rate_limits": { 00:09:08.894 
"rw_ios_per_sec": 0, 00:09:08.894 "rw_mbytes_per_sec": 0, 00:09:08.894 "r_mbytes_per_sec": 0, 00:09:08.894 "w_mbytes_per_sec": 0 00:09:08.894 }, 00:09:08.894 "claimed": true, 00:09:08.894 "claim_type": "exclusive_write", 00:09:08.894 "zoned": false, 00:09:08.894 "supported_io_types": { 00:09:08.894 "read": true, 00:09:08.894 "write": true, 00:09:08.894 "unmap": true, 00:09:08.894 "flush": true, 00:09:08.894 "reset": true, 00:09:08.894 "nvme_admin": false, 00:09:08.894 "nvme_io": false, 00:09:08.894 "nvme_io_md": false, 00:09:08.894 "write_zeroes": true, 00:09:08.894 "zcopy": true, 00:09:08.894 "get_zone_info": false, 00:09:08.894 "zone_management": false, 00:09:08.894 "zone_append": false, 00:09:08.894 "compare": false, 00:09:08.894 "compare_and_write": false, 00:09:08.894 "abort": true, 00:09:08.894 "seek_hole": false, 00:09:08.894 "seek_data": false, 00:09:08.894 "copy": true, 00:09:08.894 "nvme_iov_md": false 00:09:08.894 }, 00:09:08.894 "memory_domains": [ 00:09:08.894 { 00:09:08.894 "dma_device_id": "system", 00:09:08.894 "dma_device_type": 1 00:09:08.894 }, 00:09:08.894 { 00:09:08.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.894 "dma_device_type": 2 00:09:08.894 } 00:09:08.894 ], 00:09:08.894 "driver_specific": {} 00:09:08.894 } 00:09:08.894 ] 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.894 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.895 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.895 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.895 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.895 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.895 "name": "Existed_Raid", 00:09:08.895 "uuid": "c0afa092-6d26-4a78-82bd-13e2d079d7be", 00:09:08.895 "strip_size_kb": 64, 00:09:08.895 "state": "configuring", 00:09:08.895 "raid_level": "concat", 00:09:08.895 "superblock": true, 00:09:08.895 "num_base_bdevs": 3, 00:09:08.895 "num_base_bdevs_discovered": 1, 00:09:08.895 "num_base_bdevs_operational": 3, 00:09:08.895 "base_bdevs_list": [ 00:09:08.895 { 00:09:08.895 "name": "BaseBdev1", 00:09:08.895 "uuid": "d1bf8185-7c93-4951-a04f-57c3d3a1e8d8", 00:09:08.895 "is_configured": true, 00:09:08.895 "data_offset": 2048, 00:09:08.895 "data_size": 
63488 00:09:08.895 }, 00:09:08.895 { 00:09:08.895 "name": "BaseBdev2", 00:09:08.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.895 "is_configured": false, 00:09:08.895 "data_offset": 0, 00:09:08.895 "data_size": 0 00:09:08.895 }, 00:09:08.895 { 00:09:08.895 "name": "BaseBdev3", 00:09:08.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.895 "is_configured": false, 00:09:08.895 "data_offset": 0, 00:09:08.895 "data_size": 0 00:09:08.895 } 00:09:08.895 ] 00:09:08.895 }' 00:09:08.895 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.895 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.154 09:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.154 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.154 09:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.154 [2024-10-15 09:08:27.003507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.154 [2024-10-15 09:08:27.003565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.154 [2024-10-15 09:08:27.015562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.154 [2024-10-15 
09:08:27.017704] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.154 [2024-10-15 09:08:27.017817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.154 [2024-10-15 09:08:27.017859] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.154 [2024-10-15 09:08:27.017887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.154 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.413 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.413 "name": "Existed_Raid", 00:09:09.413 "uuid": "ef78c16d-1abf-446d-9d81-ba33769fcaf3", 00:09:09.413 "strip_size_kb": 64, 00:09:09.413 "state": "configuring", 00:09:09.413 "raid_level": "concat", 00:09:09.413 "superblock": true, 00:09:09.413 "num_base_bdevs": 3, 00:09:09.413 "num_base_bdevs_discovered": 1, 00:09:09.413 "num_base_bdevs_operational": 3, 00:09:09.413 "base_bdevs_list": [ 00:09:09.413 { 00:09:09.413 "name": "BaseBdev1", 00:09:09.413 "uuid": "d1bf8185-7c93-4951-a04f-57c3d3a1e8d8", 00:09:09.413 "is_configured": true, 00:09:09.413 "data_offset": 2048, 00:09:09.413 "data_size": 63488 00:09:09.413 }, 00:09:09.413 { 00:09:09.413 "name": "BaseBdev2", 00:09:09.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.413 "is_configured": false, 00:09:09.413 "data_offset": 0, 00:09:09.413 "data_size": 0 00:09:09.413 }, 00:09:09.413 { 00:09:09.413 "name": "BaseBdev3", 00:09:09.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.413 "is_configured": false, 00:09:09.413 "data_offset": 0, 00:09:09.413 "data_size": 0 00:09:09.413 } 00:09:09.413 ] 00:09:09.413 }' 00:09:09.413 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.413 09:08:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.671 [2024-10-15 09:08:27.525674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.671 BaseBdev2 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.671 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.672 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.672 09:08:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.672 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.672 [ 00:09:09.672 { 00:09:09.672 "name": "BaseBdev2", 00:09:09.672 "aliases": [ 00:09:09.672 "ebf395d7-c37b-48da-be7b-ef0915ecdd35" 00:09:09.672 ], 00:09:09.672 "product_name": "Malloc disk", 00:09:09.672 "block_size": 512, 00:09:09.672 "num_blocks": 65536, 00:09:09.672 "uuid": "ebf395d7-c37b-48da-be7b-ef0915ecdd35", 00:09:09.672 "assigned_rate_limits": { 00:09:09.672 "rw_ios_per_sec": 0, 00:09:09.672 "rw_mbytes_per_sec": 0, 00:09:09.672 "r_mbytes_per_sec": 0, 00:09:09.672 "w_mbytes_per_sec": 0 00:09:09.672 }, 00:09:09.672 "claimed": true, 00:09:09.672 "claim_type": "exclusive_write", 00:09:09.672 "zoned": false, 00:09:09.672 "supported_io_types": { 00:09:09.672 "read": true, 00:09:09.672 "write": true, 00:09:09.672 "unmap": true, 00:09:09.672 "flush": true, 00:09:09.672 "reset": true, 00:09:09.672 "nvme_admin": false, 00:09:09.672 "nvme_io": false, 00:09:09.672 "nvme_io_md": false, 00:09:09.672 "write_zeroes": true, 00:09:09.672 "zcopy": true, 00:09:09.672 "get_zone_info": false, 00:09:09.672 "zone_management": false, 00:09:09.672 "zone_append": false, 00:09:09.672 "compare": false, 00:09:09.672 "compare_and_write": false, 00:09:09.672 "abort": true, 00:09:09.672 "seek_hole": false, 00:09:09.672 "seek_data": false, 00:09:09.672 "copy": true, 00:09:09.672 "nvme_iov_md": false 00:09:09.672 }, 00:09:09.672 "memory_domains": [ 00:09:09.672 { 00:09:09.672 "dma_device_id": "system", 00:09:09.672 "dma_device_type": 1 00:09:09.672 }, 00:09:09.672 { 00:09:09.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.672 "dma_device_type": 2 00:09:09.672 } 00:09:09.672 ], 00:09:09.672 "driver_specific": {} 00:09:09.672 } 00:09:09.672 ] 00:09:09.672 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.931 "name": "Existed_Raid", 00:09:09.931 "uuid": "ef78c16d-1abf-446d-9d81-ba33769fcaf3", 00:09:09.931 "strip_size_kb": 64, 00:09:09.931 "state": "configuring", 00:09:09.931 "raid_level": "concat", 00:09:09.931 "superblock": true, 00:09:09.931 "num_base_bdevs": 3, 00:09:09.931 "num_base_bdevs_discovered": 2, 00:09:09.931 "num_base_bdevs_operational": 3, 00:09:09.931 "base_bdevs_list": [ 00:09:09.931 { 00:09:09.931 "name": "BaseBdev1", 00:09:09.931 "uuid": "d1bf8185-7c93-4951-a04f-57c3d3a1e8d8", 00:09:09.931 "is_configured": true, 00:09:09.931 "data_offset": 2048, 00:09:09.931 "data_size": 63488 00:09:09.931 }, 00:09:09.931 { 00:09:09.931 "name": "BaseBdev2", 00:09:09.931 "uuid": "ebf395d7-c37b-48da-be7b-ef0915ecdd35", 00:09:09.931 "is_configured": true, 00:09:09.931 "data_offset": 2048, 00:09:09.931 "data_size": 63488 00:09:09.931 }, 00:09:09.931 { 00:09:09.931 "name": "BaseBdev3", 00:09:09.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.931 "is_configured": false, 00:09:09.931 "data_offset": 0, 00:09:09.931 "data_size": 0 00:09:09.931 } 00:09:09.931 ] 00:09:09.931 }' 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.931 09:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.191 [2024-10-15 09:08:28.079712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.191 [2024-10-15 09:08:28.080150] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.191 [2024-10-15 09:08:28.080179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.191 [2024-10-15 09:08:28.080462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:10.191 [2024-10-15 09:08:28.080631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.191 [2024-10-15 09:08:28.080640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:10.191 BaseBdev3 00:09:10.191 [2024-10-15 09:08:28.080822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.191 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.449 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:10.449 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.449 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.449 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.449 [ 00:09:10.449 { 00:09:10.449 "name": "BaseBdev3", 00:09:10.449 "aliases": [ 00:09:10.449 "f433a90b-bbab-49e8-996c-9aede777e023" 00:09:10.449 ], 00:09:10.449 "product_name": "Malloc disk", 00:09:10.449 "block_size": 512, 00:09:10.449 "num_blocks": 65536, 00:09:10.449 "uuid": "f433a90b-bbab-49e8-996c-9aede777e023", 00:09:10.449 "assigned_rate_limits": { 00:09:10.449 "rw_ios_per_sec": 0, 00:09:10.449 "rw_mbytes_per_sec": 0, 00:09:10.450 "r_mbytes_per_sec": 0, 00:09:10.450 "w_mbytes_per_sec": 0 00:09:10.450 }, 00:09:10.450 "claimed": true, 00:09:10.450 "claim_type": "exclusive_write", 00:09:10.450 "zoned": false, 00:09:10.450 "supported_io_types": { 00:09:10.450 "read": true, 00:09:10.450 "write": true, 00:09:10.450 "unmap": true, 00:09:10.450 "flush": true, 00:09:10.450 "reset": true, 00:09:10.450 "nvme_admin": false, 00:09:10.450 "nvme_io": false, 00:09:10.450 "nvme_io_md": false, 00:09:10.450 "write_zeroes": true, 00:09:10.450 "zcopy": true, 00:09:10.450 "get_zone_info": false, 00:09:10.450 "zone_management": false, 00:09:10.450 "zone_append": false, 00:09:10.450 "compare": false, 00:09:10.450 "compare_and_write": false, 00:09:10.450 "abort": true, 00:09:10.450 "seek_hole": false, 00:09:10.450 "seek_data": false, 00:09:10.450 "copy": true, 00:09:10.450 "nvme_iov_md": false 00:09:10.450 }, 00:09:10.450 "memory_domains": [ 00:09:10.450 { 00:09:10.450 "dma_device_id": "system", 00:09:10.450 "dma_device_type": 1 00:09:10.450 }, 00:09:10.450 { 00:09:10.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.450 "dma_device_type": 2 00:09:10.450 } 00:09:10.450 ], 00:09:10.450 "driver_specific": 
{} 00:09:10.450 } 00:09:10.450 ] 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.450 "name": "Existed_Raid", 00:09:10.450 "uuid": "ef78c16d-1abf-446d-9d81-ba33769fcaf3", 00:09:10.450 "strip_size_kb": 64, 00:09:10.450 "state": "online", 00:09:10.450 "raid_level": "concat", 00:09:10.450 "superblock": true, 00:09:10.450 "num_base_bdevs": 3, 00:09:10.450 "num_base_bdevs_discovered": 3, 00:09:10.450 "num_base_bdevs_operational": 3, 00:09:10.450 "base_bdevs_list": [ 00:09:10.450 { 00:09:10.450 "name": "BaseBdev1", 00:09:10.450 "uuid": "d1bf8185-7c93-4951-a04f-57c3d3a1e8d8", 00:09:10.450 "is_configured": true, 00:09:10.450 "data_offset": 2048, 00:09:10.450 "data_size": 63488 00:09:10.450 }, 00:09:10.450 { 00:09:10.450 "name": "BaseBdev2", 00:09:10.450 "uuid": "ebf395d7-c37b-48da-be7b-ef0915ecdd35", 00:09:10.450 "is_configured": true, 00:09:10.450 "data_offset": 2048, 00:09:10.450 "data_size": 63488 00:09:10.450 }, 00:09:10.450 { 00:09:10.450 "name": "BaseBdev3", 00:09:10.450 "uuid": "f433a90b-bbab-49e8-996c-9aede777e023", 00:09:10.450 "is_configured": true, 00:09:10.450 "data_offset": 2048, 00:09:10.450 "data_size": 63488 00:09:10.450 } 00:09:10.450 ] 00:09:10.450 }' 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.450 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.712 [2024-10-15 09:08:28.539395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.712 "name": "Existed_Raid", 00:09:10.712 "aliases": [ 00:09:10.712 "ef78c16d-1abf-446d-9d81-ba33769fcaf3" 00:09:10.712 ], 00:09:10.712 "product_name": "Raid Volume", 00:09:10.712 "block_size": 512, 00:09:10.712 "num_blocks": 190464, 00:09:10.712 "uuid": "ef78c16d-1abf-446d-9d81-ba33769fcaf3", 00:09:10.712 "assigned_rate_limits": { 00:09:10.712 "rw_ios_per_sec": 0, 00:09:10.712 "rw_mbytes_per_sec": 0, 00:09:10.712 "r_mbytes_per_sec": 0, 00:09:10.712 "w_mbytes_per_sec": 0 00:09:10.712 }, 00:09:10.712 "claimed": false, 00:09:10.712 "zoned": false, 00:09:10.712 "supported_io_types": { 00:09:10.712 "read": true, 00:09:10.712 "write": true, 00:09:10.712 "unmap": true, 00:09:10.712 "flush": true, 00:09:10.712 "reset": true, 00:09:10.712 "nvme_admin": false, 00:09:10.712 "nvme_io": false, 00:09:10.712 "nvme_io_md": false, 00:09:10.712 
"write_zeroes": true, 00:09:10.712 "zcopy": false, 00:09:10.712 "get_zone_info": false, 00:09:10.712 "zone_management": false, 00:09:10.712 "zone_append": false, 00:09:10.712 "compare": false, 00:09:10.712 "compare_and_write": false, 00:09:10.712 "abort": false, 00:09:10.712 "seek_hole": false, 00:09:10.712 "seek_data": false, 00:09:10.712 "copy": false, 00:09:10.712 "nvme_iov_md": false 00:09:10.712 }, 00:09:10.712 "memory_domains": [ 00:09:10.712 { 00:09:10.712 "dma_device_id": "system", 00:09:10.712 "dma_device_type": 1 00:09:10.712 }, 00:09:10.712 { 00:09:10.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.712 "dma_device_type": 2 00:09:10.712 }, 00:09:10.712 { 00:09:10.712 "dma_device_id": "system", 00:09:10.712 "dma_device_type": 1 00:09:10.712 }, 00:09:10.712 { 00:09:10.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.712 "dma_device_type": 2 00:09:10.712 }, 00:09:10.712 { 00:09:10.712 "dma_device_id": "system", 00:09:10.712 "dma_device_type": 1 00:09:10.712 }, 00:09:10.712 { 00:09:10.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.712 "dma_device_type": 2 00:09:10.712 } 00:09:10.712 ], 00:09:10.712 "driver_specific": { 00:09:10.712 "raid": { 00:09:10.712 "uuid": "ef78c16d-1abf-446d-9d81-ba33769fcaf3", 00:09:10.712 "strip_size_kb": 64, 00:09:10.712 "state": "online", 00:09:10.712 "raid_level": "concat", 00:09:10.712 "superblock": true, 00:09:10.712 "num_base_bdevs": 3, 00:09:10.712 "num_base_bdevs_discovered": 3, 00:09:10.712 "num_base_bdevs_operational": 3, 00:09:10.712 "base_bdevs_list": [ 00:09:10.712 { 00:09:10.712 "name": "BaseBdev1", 00:09:10.712 "uuid": "d1bf8185-7c93-4951-a04f-57c3d3a1e8d8", 00:09:10.712 "is_configured": true, 00:09:10.712 "data_offset": 2048, 00:09:10.712 "data_size": 63488 00:09:10.712 }, 00:09:10.712 { 00:09:10.712 "name": "BaseBdev2", 00:09:10.712 "uuid": "ebf395d7-c37b-48da-be7b-ef0915ecdd35", 00:09:10.712 "is_configured": true, 00:09:10.712 "data_offset": 2048, 00:09:10.712 "data_size": 63488 00:09:10.712 }, 
00:09:10.712 { 00:09:10.712 "name": "BaseBdev3", 00:09:10.712 "uuid": "f433a90b-bbab-49e8-996c-9aede777e023", 00:09:10.712 "is_configured": true, 00:09:10.712 "data_offset": 2048, 00:09:10.712 "data_size": 63488 00:09:10.712 } 00:09:10.712 ] 00:09:10.712 } 00:09:10.712 } 00:09:10.712 }' 00:09:10.712 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:10.973 BaseBdev2 00:09:10.973 BaseBdev3' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.973 
09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.973 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.973 [2024-10-15 09:08:28.838574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.973 [2024-10-15 09:08:28.838614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.973 [2024-10-15 09:08:28.838672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.232 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.232 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.232 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:11.232 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.232 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.232 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.232 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:11.232 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.233 "name": "Existed_Raid", 00:09:11.233 "uuid": "ef78c16d-1abf-446d-9d81-ba33769fcaf3", 00:09:11.233 "strip_size_kb": 64, 00:09:11.233 "state": "offline", 00:09:11.233 "raid_level": "concat", 00:09:11.233 "superblock": true, 00:09:11.233 "num_base_bdevs": 3, 00:09:11.233 "num_base_bdevs_discovered": 2, 00:09:11.233 "num_base_bdevs_operational": 2, 00:09:11.233 "base_bdevs_list": [ 00:09:11.233 { 00:09:11.233 "name": null, 00:09:11.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.233 "is_configured": false, 00:09:11.233 "data_offset": 0, 00:09:11.233 "data_size": 63488 00:09:11.233 }, 00:09:11.233 { 00:09:11.233 "name": "BaseBdev2", 00:09:11.233 "uuid": "ebf395d7-c37b-48da-be7b-ef0915ecdd35", 00:09:11.233 "is_configured": true, 00:09:11.233 "data_offset": 2048, 00:09:11.233 "data_size": 63488 00:09:11.233 }, 00:09:11.233 { 00:09:11.233 "name": "BaseBdev3", 00:09:11.233 "uuid": "f433a90b-bbab-49e8-996c-9aede777e023", 
00:09:11.233 "is_configured": true, 00:09:11.233 "data_offset": 2048, 00:09:11.233 "data_size": 63488 00:09:11.233 } 00:09:11.233 ] 00:09:11.233 }' 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.233 09:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.507 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.507 [2024-10-15 09:08:29.394995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.778 [2024-10-15 09:08:29.558355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.778 [2024-10-15 09:08:29.558508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.778 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.066 BaseBdev2 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.066 09:08:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.066 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.066 [ 00:09:12.066 { 00:09:12.066 "name": "BaseBdev2", 00:09:12.066 "aliases": [ 00:09:12.066 "687d36b4-764e-43cc-9b23-98f0f2fe8010" 00:09:12.066 ], 00:09:12.066 "product_name": "Malloc disk", 00:09:12.066 "block_size": 512, 00:09:12.066 "num_blocks": 65536, 00:09:12.066 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:12.066 "assigned_rate_limits": { 00:09:12.066 "rw_ios_per_sec": 0, 00:09:12.066 "rw_mbytes_per_sec": 0, 00:09:12.066 "r_mbytes_per_sec": 0, 00:09:12.066 "w_mbytes_per_sec": 0 00:09:12.066 }, 00:09:12.066 "claimed": false, 00:09:12.066 "zoned": false, 00:09:12.066 "supported_io_types": { 00:09:12.066 "read": true, 00:09:12.066 "write": true, 00:09:12.066 "unmap": true, 00:09:12.066 "flush": true, 00:09:12.066 "reset": true, 00:09:12.066 "nvme_admin": false, 00:09:12.066 "nvme_io": false, 00:09:12.066 "nvme_io_md": false, 00:09:12.066 "write_zeroes": true, 00:09:12.066 "zcopy": true, 00:09:12.066 "get_zone_info": false, 00:09:12.066 
"zone_management": false, 00:09:12.066 "zone_append": false, 00:09:12.067 "compare": false, 00:09:12.067 "compare_and_write": false, 00:09:12.067 "abort": true, 00:09:12.067 "seek_hole": false, 00:09:12.067 "seek_data": false, 00:09:12.067 "copy": true, 00:09:12.067 "nvme_iov_md": false 00:09:12.067 }, 00:09:12.067 "memory_domains": [ 00:09:12.067 { 00:09:12.067 "dma_device_id": "system", 00:09:12.067 "dma_device_type": 1 00:09:12.067 }, 00:09:12.067 { 00:09:12.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.067 "dma_device_type": 2 00:09:12.067 } 00:09:12.067 ], 00:09:12.067 "driver_specific": {} 00:09:12.067 } 00:09:12.067 ] 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.067 BaseBdev3 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.067 [ 00:09:12.067 { 00:09:12.067 "name": "BaseBdev3", 00:09:12.067 "aliases": [ 00:09:12.067 "e254dcdf-bb5d-4c83-9779-f1721a6eff0c" 00:09:12.067 ], 00:09:12.067 "product_name": "Malloc disk", 00:09:12.067 "block_size": 512, 00:09:12.067 "num_blocks": 65536, 00:09:12.067 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:12.067 "assigned_rate_limits": { 00:09:12.067 "rw_ios_per_sec": 0, 00:09:12.067 "rw_mbytes_per_sec": 0, 00:09:12.067 "r_mbytes_per_sec": 0, 00:09:12.067 "w_mbytes_per_sec": 0 00:09:12.067 }, 00:09:12.067 "claimed": false, 00:09:12.067 "zoned": false, 00:09:12.067 "supported_io_types": { 00:09:12.067 "read": true, 00:09:12.067 "write": true, 00:09:12.067 "unmap": true, 00:09:12.067 "flush": true, 00:09:12.067 "reset": true, 00:09:12.067 "nvme_admin": false, 00:09:12.067 "nvme_io": false, 00:09:12.067 "nvme_io_md": false, 00:09:12.067 "write_zeroes": true, 00:09:12.067 
"zcopy": true, 00:09:12.067 "get_zone_info": false, 00:09:12.067 "zone_management": false, 00:09:12.067 "zone_append": false, 00:09:12.067 "compare": false, 00:09:12.067 "compare_and_write": false, 00:09:12.067 "abort": true, 00:09:12.067 "seek_hole": false, 00:09:12.067 "seek_data": false, 00:09:12.067 "copy": true, 00:09:12.067 "nvme_iov_md": false 00:09:12.067 }, 00:09:12.067 "memory_domains": [ 00:09:12.067 { 00:09:12.067 "dma_device_id": "system", 00:09:12.067 "dma_device_type": 1 00:09:12.067 }, 00:09:12.067 { 00:09:12.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.067 "dma_device_type": 2 00:09:12.067 } 00:09:12.067 ], 00:09:12.067 "driver_specific": {} 00:09:12.067 } 00:09:12.067 ] 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.067 [2024-10-15 09:08:29.873575] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.067 [2024-10-15 09:08:29.873670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.067 [2024-10-15 09:08:29.873740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.067 [2024-10-15 09:08:29.875759] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.067 09:08:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.067 "name": "Existed_Raid", 00:09:12.067 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:12.067 "strip_size_kb": 64, 00:09:12.067 "state": "configuring", 00:09:12.067 "raid_level": "concat", 00:09:12.067 "superblock": true, 00:09:12.067 "num_base_bdevs": 3, 00:09:12.067 "num_base_bdevs_discovered": 2, 00:09:12.067 "num_base_bdevs_operational": 3, 00:09:12.067 "base_bdevs_list": [ 00:09:12.067 { 00:09:12.067 "name": "BaseBdev1", 00:09:12.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.067 "is_configured": false, 00:09:12.067 "data_offset": 0, 00:09:12.067 "data_size": 0 00:09:12.067 }, 00:09:12.067 { 00:09:12.067 "name": "BaseBdev2", 00:09:12.067 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:12.067 "is_configured": true, 00:09:12.067 "data_offset": 2048, 00:09:12.067 "data_size": 63488 00:09:12.067 }, 00:09:12.067 { 00:09:12.067 "name": "BaseBdev3", 00:09:12.067 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:12.067 "is_configured": true, 00:09:12.067 "data_offset": 2048, 00:09:12.067 "data_size": 63488 00:09:12.067 } 00:09:12.067 ] 00:09:12.067 }' 00:09:12.067 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.068 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.638 [2024-10-15 09:08:30.333064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.638 09:08:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.638 "name": "Existed_Raid", 00:09:12.638 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:12.638 "strip_size_kb": 64, 
00:09:12.638 "state": "configuring", 00:09:12.638 "raid_level": "concat", 00:09:12.638 "superblock": true, 00:09:12.638 "num_base_bdevs": 3, 00:09:12.638 "num_base_bdevs_discovered": 1, 00:09:12.638 "num_base_bdevs_operational": 3, 00:09:12.638 "base_bdevs_list": [ 00:09:12.638 { 00:09:12.638 "name": "BaseBdev1", 00:09:12.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.638 "is_configured": false, 00:09:12.638 "data_offset": 0, 00:09:12.638 "data_size": 0 00:09:12.638 }, 00:09:12.638 { 00:09:12.638 "name": null, 00:09:12.638 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:12.638 "is_configured": false, 00:09:12.638 "data_offset": 0, 00:09:12.638 "data_size": 63488 00:09:12.638 }, 00:09:12.638 { 00:09:12.638 "name": "BaseBdev3", 00:09:12.638 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:12.638 "is_configured": true, 00:09:12.638 "data_offset": 2048, 00:09:12.638 "data_size": 63488 00:09:12.638 } 00:09:12.638 ] 00:09:12.638 }' 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.638 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.897 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.897 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.897 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.897 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.897 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.159 [2024-10-15 09:08:30.860656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.159 BaseBdev1 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.159 
[ 00:09:13.159 { 00:09:13.159 "name": "BaseBdev1", 00:09:13.159 "aliases": [ 00:09:13.159 "5d6a03ad-b724-4f11-800c-6c4f0bdfb429" 00:09:13.159 ], 00:09:13.159 "product_name": "Malloc disk", 00:09:13.159 "block_size": 512, 00:09:13.159 "num_blocks": 65536, 00:09:13.159 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:13.159 "assigned_rate_limits": { 00:09:13.159 "rw_ios_per_sec": 0, 00:09:13.159 "rw_mbytes_per_sec": 0, 00:09:13.159 "r_mbytes_per_sec": 0, 00:09:13.159 "w_mbytes_per_sec": 0 00:09:13.159 }, 00:09:13.159 "claimed": true, 00:09:13.159 "claim_type": "exclusive_write", 00:09:13.159 "zoned": false, 00:09:13.159 "supported_io_types": { 00:09:13.159 "read": true, 00:09:13.159 "write": true, 00:09:13.159 "unmap": true, 00:09:13.159 "flush": true, 00:09:13.159 "reset": true, 00:09:13.159 "nvme_admin": false, 00:09:13.159 "nvme_io": false, 00:09:13.159 "nvme_io_md": false, 00:09:13.159 "write_zeroes": true, 00:09:13.159 "zcopy": true, 00:09:13.159 "get_zone_info": false, 00:09:13.159 "zone_management": false, 00:09:13.159 "zone_append": false, 00:09:13.159 "compare": false, 00:09:13.159 "compare_and_write": false, 00:09:13.159 "abort": true, 00:09:13.159 "seek_hole": false, 00:09:13.159 "seek_data": false, 00:09:13.159 "copy": true, 00:09:13.159 "nvme_iov_md": false 00:09:13.159 }, 00:09:13.159 "memory_domains": [ 00:09:13.159 { 00:09:13.159 "dma_device_id": "system", 00:09:13.159 "dma_device_type": 1 00:09:13.159 }, 00:09:13.159 { 00:09:13.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.159 "dma_device_type": 2 00:09:13.159 } 00:09:13.159 ], 00:09:13.159 "driver_specific": {} 00:09:13.159 } 00:09:13.159 ] 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.159 "name": "Existed_Raid", 00:09:13.159 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:13.159 "strip_size_kb": 64, 00:09:13.159 "state": "configuring", 00:09:13.159 "raid_level": "concat", 00:09:13.159 "superblock": true, 
00:09:13.159 "num_base_bdevs": 3, 00:09:13.159 "num_base_bdevs_discovered": 2, 00:09:13.159 "num_base_bdevs_operational": 3, 00:09:13.159 "base_bdevs_list": [ 00:09:13.159 { 00:09:13.159 "name": "BaseBdev1", 00:09:13.159 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:13.159 "is_configured": true, 00:09:13.159 "data_offset": 2048, 00:09:13.159 "data_size": 63488 00:09:13.159 }, 00:09:13.159 { 00:09:13.159 "name": null, 00:09:13.159 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:13.159 "is_configured": false, 00:09:13.159 "data_offset": 0, 00:09:13.159 "data_size": 63488 00:09:13.159 }, 00:09:13.159 { 00:09:13.159 "name": "BaseBdev3", 00:09:13.159 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:13.159 "is_configured": true, 00:09:13.159 "data_offset": 2048, 00:09:13.159 "data_size": 63488 00:09:13.159 } 00:09:13.159 ] 00:09:13.159 }' 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.159 09:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.728 [2024-10-15 09:08:31.411854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.728 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.729 "name": "Existed_Raid", 00:09:13.729 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:13.729 "strip_size_kb": 64, 00:09:13.729 "state": "configuring", 00:09:13.729 "raid_level": "concat", 00:09:13.729 "superblock": true, 00:09:13.729 "num_base_bdevs": 3, 00:09:13.729 "num_base_bdevs_discovered": 1, 00:09:13.729 "num_base_bdevs_operational": 3, 00:09:13.729 "base_bdevs_list": [ 00:09:13.729 { 00:09:13.729 "name": "BaseBdev1", 00:09:13.729 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:13.729 "is_configured": true, 00:09:13.729 "data_offset": 2048, 00:09:13.729 "data_size": 63488 00:09:13.729 }, 00:09:13.729 { 00:09:13.729 "name": null, 00:09:13.729 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:13.729 "is_configured": false, 00:09:13.729 "data_offset": 0, 00:09:13.729 "data_size": 63488 00:09:13.729 }, 00:09:13.729 { 00:09:13.729 "name": null, 00:09:13.729 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:13.729 "is_configured": false, 00:09:13.729 "data_offset": 0, 00:09:13.729 "data_size": 63488 00:09:13.729 } 00:09:13.729 ] 00:09:13.729 }' 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.729 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.297 [2024-10-15 09:08:31.926996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.297 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.297 "name": "Existed_Raid", 00:09:14.297 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:14.297 "strip_size_kb": 64, 00:09:14.297 "state": "configuring", 00:09:14.297 "raid_level": "concat", 00:09:14.297 "superblock": true, 00:09:14.297 "num_base_bdevs": 3, 00:09:14.298 "num_base_bdevs_discovered": 2, 00:09:14.298 "num_base_bdevs_operational": 3, 00:09:14.298 "base_bdevs_list": [ 00:09:14.298 { 00:09:14.298 "name": "BaseBdev1", 00:09:14.298 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:14.298 "is_configured": true, 00:09:14.298 "data_offset": 2048, 00:09:14.298 "data_size": 63488 00:09:14.298 }, 00:09:14.298 { 00:09:14.298 "name": null, 00:09:14.298 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:14.298 "is_configured": false, 00:09:14.298 "data_offset": 0, 00:09:14.298 "data_size": 63488 00:09:14.298 }, 00:09:14.298 { 00:09:14.298 "name": "BaseBdev3", 00:09:14.298 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:14.298 "is_configured": true, 00:09:14.298 "data_offset": 2048, 00:09:14.298 "data_size": 63488 00:09:14.298 } 00:09:14.298 ] 00:09:14.298 }' 00:09:14.298 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.298 09:08:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.557 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.557 [2024-10-15 09:08:32.410194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.817 "name": "Existed_Raid", 00:09:14.817 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:14.817 "strip_size_kb": 64, 00:09:14.817 "state": "configuring", 00:09:14.817 "raid_level": "concat", 00:09:14.817 "superblock": true, 00:09:14.817 "num_base_bdevs": 3, 00:09:14.817 "num_base_bdevs_discovered": 1, 00:09:14.817 "num_base_bdevs_operational": 3, 00:09:14.817 "base_bdevs_list": [ 00:09:14.817 { 00:09:14.817 "name": null, 00:09:14.817 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:14.817 "is_configured": false, 00:09:14.817 "data_offset": 0, 00:09:14.817 "data_size": 63488 00:09:14.817 }, 00:09:14.817 { 00:09:14.817 "name": null, 00:09:14.817 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:14.817 "is_configured": false, 00:09:14.817 "data_offset": 0, 
00:09:14.817 "data_size": 63488 00:09:14.817 }, 00:09:14.817 { 00:09:14.817 "name": "BaseBdev3", 00:09:14.817 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:14.817 "is_configured": true, 00:09:14.817 "data_offset": 2048, 00:09:14.817 "data_size": 63488 00:09:14.817 } 00:09:14.817 ] 00:09:14.817 }' 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.817 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 [2024-10-15 09:08:33.066587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.387 09:08:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.387 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.388 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.388 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.388 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.388 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.388 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.388 "name": "Existed_Raid", 00:09:15.388 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:15.388 "strip_size_kb": 64, 00:09:15.388 "state": "configuring", 00:09:15.388 "raid_level": "concat", 00:09:15.388 "superblock": true, 00:09:15.388 "num_base_bdevs": 3, 00:09:15.388 
"num_base_bdevs_discovered": 2, 00:09:15.388 "num_base_bdevs_operational": 3, 00:09:15.388 "base_bdevs_list": [ 00:09:15.388 { 00:09:15.388 "name": null, 00:09:15.388 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:15.388 "is_configured": false, 00:09:15.388 "data_offset": 0, 00:09:15.388 "data_size": 63488 00:09:15.388 }, 00:09:15.388 { 00:09:15.388 "name": "BaseBdev2", 00:09:15.388 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:15.388 "is_configured": true, 00:09:15.388 "data_offset": 2048, 00:09:15.388 "data_size": 63488 00:09:15.388 }, 00:09:15.388 { 00:09:15.388 "name": "BaseBdev3", 00:09:15.388 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:15.388 "is_configured": true, 00:09:15.388 "data_offset": 2048, 00:09:15.388 "data_size": 63488 00:09:15.388 } 00:09:15.388 ] 00:09:15.388 }' 00:09:15.388 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.388 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.650 09:08:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.650 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5d6a03ad-b724-4f11-800c-6c4f0bdfb429 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.911 [2024-10-15 09:08:33.629960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.911 [2024-10-15 09:08:33.630246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.911 [2024-10-15 09:08:33.630265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.911 [2024-10-15 09:08:33.630550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:15.911 NewBaseBdev 00:09:15.911 [2024-10-15 09:08:33.630729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.911 [2024-10-15 09:08:33.630741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:15.911 [2024-10-15 09:08:33.630914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:15.911 
09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.911 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.911 [ 00:09:15.911 { 00:09:15.911 "name": "NewBaseBdev", 00:09:15.911 "aliases": [ 00:09:15.911 "5d6a03ad-b724-4f11-800c-6c4f0bdfb429" 00:09:15.911 ], 00:09:15.911 "product_name": "Malloc disk", 00:09:15.911 "block_size": 512, 00:09:15.911 "num_blocks": 65536, 00:09:15.911 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:15.911 "assigned_rate_limits": { 00:09:15.911 "rw_ios_per_sec": 0, 00:09:15.911 "rw_mbytes_per_sec": 0, 00:09:15.911 "r_mbytes_per_sec": 0, 00:09:15.911 "w_mbytes_per_sec": 0 00:09:15.911 }, 00:09:15.911 "claimed": true, 00:09:15.911 "claim_type": "exclusive_write", 00:09:15.911 "zoned": false, 00:09:15.911 "supported_io_types": { 00:09:15.911 "read": true, 00:09:15.911 "write": true, 00:09:15.911 
"unmap": true, 00:09:15.911 "flush": true, 00:09:15.911 "reset": true, 00:09:15.911 "nvme_admin": false, 00:09:15.911 "nvme_io": false, 00:09:15.911 "nvme_io_md": false, 00:09:15.911 "write_zeroes": true, 00:09:15.911 "zcopy": true, 00:09:15.911 "get_zone_info": false, 00:09:15.911 "zone_management": false, 00:09:15.912 "zone_append": false, 00:09:15.912 "compare": false, 00:09:15.912 "compare_and_write": false, 00:09:15.912 "abort": true, 00:09:15.912 "seek_hole": false, 00:09:15.912 "seek_data": false, 00:09:15.912 "copy": true, 00:09:15.912 "nvme_iov_md": false 00:09:15.912 }, 00:09:15.912 "memory_domains": [ 00:09:15.912 { 00:09:15.912 "dma_device_id": "system", 00:09:15.912 "dma_device_type": 1 00:09:15.912 }, 00:09:15.912 { 00:09:15.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.912 "dma_device_type": 2 00:09:15.912 } 00:09:15.912 ], 00:09:15.912 "driver_specific": {} 00:09:15.912 } 00:09:15.912 ] 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.912 "name": "Existed_Raid", 00:09:15.912 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:15.912 "strip_size_kb": 64, 00:09:15.912 "state": "online", 00:09:15.912 "raid_level": "concat", 00:09:15.912 "superblock": true, 00:09:15.912 "num_base_bdevs": 3, 00:09:15.912 "num_base_bdevs_discovered": 3, 00:09:15.912 "num_base_bdevs_operational": 3, 00:09:15.912 "base_bdevs_list": [ 00:09:15.912 { 00:09:15.912 "name": "NewBaseBdev", 00:09:15.912 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:15.912 "is_configured": true, 00:09:15.912 "data_offset": 2048, 00:09:15.912 "data_size": 63488 00:09:15.912 }, 00:09:15.912 { 00:09:15.912 "name": "BaseBdev2", 00:09:15.912 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:15.912 "is_configured": true, 00:09:15.912 "data_offset": 2048, 00:09:15.912 "data_size": 63488 00:09:15.912 }, 00:09:15.912 { 00:09:15.912 "name": "BaseBdev3", 00:09:15.912 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 
00:09:15.912 "is_configured": true, 00:09:15.912 "data_offset": 2048, 00:09:15.912 "data_size": 63488 00:09:15.912 } 00:09:15.912 ] 00:09:15.912 }' 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.912 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.481 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.481 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.481 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.481 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.481 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.482 [2024-10-15 09:08:34.145523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.482 "name": "Existed_Raid", 00:09:16.482 "aliases": [ 00:09:16.482 "090595a3-1187-420a-9182-2ed53724a809" 00:09:16.482 ], 00:09:16.482 
"product_name": "Raid Volume", 00:09:16.482 "block_size": 512, 00:09:16.482 "num_blocks": 190464, 00:09:16.482 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:16.482 "assigned_rate_limits": { 00:09:16.482 "rw_ios_per_sec": 0, 00:09:16.482 "rw_mbytes_per_sec": 0, 00:09:16.482 "r_mbytes_per_sec": 0, 00:09:16.482 "w_mbytes_per_sec": 0 00:09:16.482 }, 00:09:16.482 "claimed": false, 00:09:16.482 "zoned": false, 00:09:16.482 "supported_io_types": { 00:09:16.482 "read": true, 00:09:16.482 "write": true, 00:09:16.482 "unmap": true, 00:09:16.482 "flush": true, 00:09:16.482 "reset": true, 00:09:16.482 "nvme_admin": false, 00:09:16.482 "nvme_io": false, 00:09:16.482 "nvme_io_md": false, 00:09:16.482 "write_zeroes": true, 00:09:16.482 "zcopy": false, 00:09:16.482 "get_zone_info": false, 00:09:16.482 "zone_management": false, 00:09:16.482 "zone_append": false, 00:09:16.482 "compare": false, 00:09:16.482 "compare_and_write": false, 00:09:16.482 "abort": false, 00:09:16.482 "seek_hole": false, 00:09:16.482 "seek_data": false, 00:09:16.482 "copy": false, 00:09:16.482 "nvme_iov_md": false 00:09:16.482 }, 00:09:16.482 "memory_domains": [ 00:09:16.482 { 00:09:16.482 "dma_device_id": "system", 00:09:16.482 "dma_device_type": 1 00:09:16.482 }, 00:09:16.482 { 00:09:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.482 "dma_device_type": 2 00:09:16.482 }, 00:09:16.482 { 00:09:16.482 "dma_device_id": "system", 00:09:16.482 "dma_device_type": 1 00:09:16.482 }, 00:09:16.482 { 00:09:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.482 "dma_device_type": 2 00:09:16.482 }, 00:09:16.482 { 00:09:16.482 "dma_device_id": "system", 00:09:16.482 "dma_device_type": 1 00:09:16.482 }, 00:09:16.482 { 00:09:16.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.482 "dma_device_type": 2 00:09:16.482 } 00:09:16.482 ], 00:09:16.482 "driver_specific": { 00:09:16.482 "raid": { 00:09:16.482 "uuid": "090595a3-1187-420a-9182-2ed53724a809", 00:09:16.482 "strip_size_kb": 64, 00:09:16.482 
"state": "online", 00:09:16.482 "raid_level": "concat", 00:09:16.482 "superblock": true, 00:09:16.482 "num_base_bdevs": 3, 00:09:16.482 "num_base_bdevs_discovered": 3, 00:09:16.482 "num_base_bdevs_operational": 3, 00:09:16.482 "base_bdevs_list": [ 00:09:16.482 { 00:09:16.482 "name": "NewBaseBdev", 00:09:16.482 "uuid": "5d6a03ad-b724-4f11-800c-6c4f0bdfb429", 00:09:16.482 "is_configured": true, 00:09:16.482 "data_offset": 2048, 00:09:16.482 "data_size": 63488 00:09:16.482 }, 00:09:16.482 { 00:09:16.482 "name": "BaseBdev2", 00:09:16.482 "uuid": "687d36b4-764e-43cc-9b23-98f0f2fe8010", 00:09:16.482 "is_configured": true, 00:09:16.482 "data_offset": 2048, 00:09:16.482 "data_size": 63488 00:09:16.482 }, 00:09:16.482 { 00:09:16.482 "name": "BaseBdev3", 00:09:16.482 "uuid": "e254dcdf-bb5d-4c83-9779-f1721a6eff0c", 00:09:16.482 "is_configured": true, 00:09:16.482 "data_offset": 2048, 00:09:16.482 "data_size": 63488 00:09:16.482 } 00:09:16.482 ] 00:09:16.482 } 00:09:16.482 } 00:09:16.482 }' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.482 BaseBdev2 00:09:16.482 BaseBdev3' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.482 09:08:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.482 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.742 [2024-10-15 09:08:34.421021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.742 [2024-10-15 09:08:34.421061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.742 [2024-10-15 09:08:34.421169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.742 [2024-10-15 09:08:34.421235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.742 [2024-10-15 09:08:34.421249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66308 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66308 ']' 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 
66308 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66308 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66308' 00:09:16.742 killing process with pid 66308 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66308 00:09:16.742 [2024-10-15 09:08:34.466763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.742 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66308 00:09:17.002 [2024-10-15 09:08:34.792992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.383 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.383 00:09:18.383 real 0m10.891s 00:09:18.383 user 0m17.312s 00:09:18.383 sys 0m1.939s 00:09:18.383 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.383 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.383 ************************************ 00:09:18.383 END TEST raid_state_function_test_sb 00:09:18.383 ************************************ 00:09:18.383 09:08:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:18.383 09:08:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:18.383 
09:08:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.383 09:08:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.383 ************************************ 00:09:18.383 START TEST raid_superblock_test 00:09:18.383 ************************************ 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:18.383 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66928 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66928 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66928 ']' 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.384 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.384 [2024-10-15 09:08:36.109958] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:09:18.384 [2024-10-15 09:08:36.110162] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66928 ] 00:09:18.384 [2024-10-15 09:08:36.277864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.644 [2024-10-15 09:08:36.417336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.933 [2024-10-15 09:08:36.638357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.933 [2024-10-15 09:08:36.638427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:19.192 
09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.192 09:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.192 malloc1 00:09:19.192 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.192 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.192 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.192 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.192 [2024-10-15 09:08:37.043050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.192 [2024-10-15 09:08:37.043186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.192 [2024-10-15 09:08:37.043231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:19.192 [2024-10-15 09:08:37.043265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.192 [2024-10-15 09:08:37.045389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.192 [2024-10-15 09:08:37.045465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.192 pt1 00:09:19.192 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.192 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.192 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.193 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.451 malloc2 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.451 [2024-10-15 09:08:37.104920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.451 [2024-10-15 09:08:37.105035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.451 [2024-10-15 09:08:37.105080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:19.451 [2024-10-15 09:08:37.105114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.451 [2024-10-15 09:08:37.107432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.451 [2024-10-15 09:08:37.107503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.451 
pt2 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.451 malloc3 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.451 [2024-10-15 09:08:37.178996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.451 [2024-10-15 09:08:37.179109] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.451 [2024-10-15 09:08:37.179169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:19.451 [2024-10-15 09:08:37.179205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.451 [2024-10-15 09:08:37.181661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.451 [2024-10-15 09:08:37.181782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.451 pt3 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.451 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.452 [2024-10-15 09:08:37.191041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.452 [2024-10-15 09:08:37.193247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.452 [2024-10-15 09:08:37.193383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.452 [2024-10-15 09:08:37.193618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:19.452 [2024-10-15 09:08:37.193694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:19.452 [2024-10-15 09:08:37.194036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:19.452 [2024-10-15 09:08:37.194279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:19.452 [2024-10-15 09:08:37.194331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:19.452 [2024-10-15 09:08:37.194540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.452 09:08:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.452 "name": "raid_bdev1", 00:09:19.452 "uuid": "48ab75ed-e830-4702-87c8-687f9107aa2c", 00:09:19.452 "strip_size_kb": 64, 00:09:19.452 "state": "online", 00:09:19.452 "raid_level": "concat", 00:09:19.452 "superblock": true, 00:09:19.452 "num_base_bdevs": 3, 00:09:19.452 "num_base_bdevs_discovered": 3, 00:09:19.452 "num_base_bdevs_operational": 3, 00:09:19.452 "base_bdevs_list": [ 00:09:19.452 { 00:09:19.452 "name": "pt1", 00:09:19.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.452 "is_configured": true, 00:09:19.452 "data_offset": 2048, 00:09:19.452 "data_size": 63488 00:09:19.452 }, 00:09:19.452 { 00:09:19.452 "name": "pt2", 00:09:19.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.452 "is_configured": true, 00:09:19.452 "data_offset": 2048, 00:09:19.452 "data_size": 63488 00:09:19.452 }, 00:09:19.452 { 00:09:19.452 "name": "pt3", 00:09:19.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.452 "is_configured": true, 00:09:19.452 "data_offset": 2048, 00:09:19.452 "data_size": 63488 00:09:19.452 } 00:09:19.452 ] 00:09:19.452 }' 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.452 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.021 [2024-10-15 09:08:37.678539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.021 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.021 "name": "raid_bdev1", 00:09:20.021 "aliases": [ 00:09:20.021 "48ab75ed-e830-4702-87c8-687f9107aa2c" 00:09:20.021 ], 00:09:20.021 "product_name": "Raid Volume", 00:09:20.021 "block_size": 512, 00:09:20.021 "num_blocks": 190464, 00:09:20.021 "uuid": "48ab75ed-e830-4702-87c8-687f9107aa2c", 00:09:20.021 "assigned_rate_limits": { 00:09:20.021 "rw_ios_per_sec": 0, 00:09:20.021 "rw_mbytes_per_sec": 0, 00:09:20.021 "r_mbytes_per_sec": 0, 00:09:20.021 "w_mbytes_per_sec": 0 00:09:20.021 }, 00:09:20.021 "claimed": false, 00:09:20.021 "zoned": false, 00:09:20.021 "supported_io_types": { 00:09:20.021 "read": true, 00:09:20.021 "write": true, 00:09:20.021 "unmap": true, 00:09:20.021 "flush": true, 00:09:20.021 "reset": true, 00:09:20.021 "nvme_admin": false, 00:09:20.021 "nvme_io": false, 00:09:20.021 "nvme_io_md": false, 00:09:20.021 "write_zeroes": true, 00:09:20.021 "zcopy": false, 00:09:20.021 "get_zone_info": false, 00:09:20.021 "zone_management": false, 00:09:20.021 "zone_append": false, 00:09:20.021 "compare": 
false, 00:09:20.021 "compare_and_write": false, 00:09:20.021 "abort": false, 00:09:20.021 "seek_hole": false, 00:09:20.021 "seek_data": false, 00:09:20.021 "copy": false, 00:09:20.021 "nvme_iov_md": false 00:09:20.021 }, 00:09:20.021 "memory_domains": [ 00:09:20.021 { 00:09:20.022 "dma_device_id": "system", 00:09:20.022 "dma_device_type": 1 00:09:20.022 }, 00:09:20.022 { 00:09:20.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.022 "dma_device_type": 2 00:09:20.022 }, 00:09:20.022 { 00:09:20.022 "dma_device_id": "system", 00:09:20.022 "dma_device_type": 1 00:09:20.022 }, 00:09:20.022 { 00:09:20.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.022 "dma_device_type": 2 00:09:20.022 }, 00:09:20.022 { 00:09:20.022 "dma_device_id": "system", 00:09:20.022 "dma_device_type": 1 00:09:20.022 }, 00:09:20.022 { 00:09:20.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.022 "dma_device_type": 2 00:09:20.022 } 00:09:20.022 ], 00:09:20.022 "driver_specific": { 00:09:20.022 "raid": { 00:09:20.022 "uuid": "48ab75ed-e830-4702-87c8-687f9107aa2c", 00:09:20.022 "strip_size_kb": 64, 00:09:20.022 "state": "online", 00:09:20.022 "raid_level": "concat", 00:09:20.022 "superblock": true, 00:09:20.022 "num_base_bdevs": 3, 00:09:20.022 "num_base_bdevs_discovered": 3, 00:09:20.022 "num_base_bdevs_operational": 3, 00:09:20.022 "base_bdevs_list": [ 00:09:20.022 { 00:09:20.022 "name": "pt1", 00:09:20.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.022 "is_configured": true, 00:09:20.022 "data_offset": 2048, 00:09:20.022 "data_size": 63488 00:09:20.022 }, 00:09:20.022 { 00:09:20.022 "name": "pt2", 00:09:20.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.022 "is_configured": true, 00:09:20.022 "data_offset": 2048, 00:09:20.022 "data_size": 63488 00:09:20.022 }, 00:09:20.022 { 00:09:20.022 "name": "pt3", 00:09:20.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.022 "is_configured": true, 00:09:20.022 "data_offset": 2048, 00:09:20.022 
"data_size": 63488 00:09:20.022 } 00:09:20.022 ] 00:09:20.022 } 00:09:20.022 } 00:09:20.022 }' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.022 pt2 00:09:20.022 pt3' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.022 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.282 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.282 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.282 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:20.282 09:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.282 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.282 09:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 [2024-10-15 09:08:37.970050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.282 09:08:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=48ab75ed-e830-4702-87c8-687f9107aa2c 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 48ab75ed-e830-4702-87c8-687f9107aa2c ']' 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 [2024-10-15 09:08:38.013617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.282 [2024-10-15 09:08:38.013655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.282 [2024-10-15 09:08:38.013776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.282 [2024-10-15 09:08:38.013851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.282 [2024-10-15 09:08:38.013863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.282 09:08:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.282 [2024-10-15 09:08:38.165386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:20.282 [2024-10-15 09:08:38.167388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:20.282 [2024-10-15 09:08:38.167492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:20.282 [2024-10-15 09:08:38.167579] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:20.282 [2024-10-15 09:08:38.167677] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:20.282 [2024-10-15 09:08:38.167747] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:20.282 [2024-10-15 09:08:38.167812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.282 [2024-10-15 09:08:38.167871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:20.282 request: 00:09:20.282 { 00:09:20.282 "name": "raid_bdev1", 00:09:20.282 "raid_level": "concat", 00:09:20.282 "base_bdevs": [ 00:09:20.282 "malloc1", 00:09:20.282 "malloc2", 00:09:20.282 "malloc3" 00:09:20.282 ], 00:09:20.282 "strip_size_kb": 64, 00:09:20.282 "superblock": false, 00:09:20.282 "method": "bdev_raid_create", 00:09:20.282 "req_id": 1 00:09:20.282 } 00:09:20.282 Got JSON-RPC error response 00:09:20.282 response: 00:09:20.282 { 00:09:20.282 "code": -17, 00:09:20.282 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:20.282 } 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.282 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 [2024-10-15 09:08:38.241229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.542 [2024-10-15 09:08:38.241362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.542 [2024-10-15 09:08:38.241407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:20.542 [2024-10-15 09:08:38.241455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.542 [2024-10-15 09:08:38.243878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.542 [2024-10-15 09:08:38.243953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.542 [2024-10-15 09:08:38.244070] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:20.542 [2024-10-15 09:08:38.244159] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:20.542 pt1 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.542 "name": "raid_bdev1", 
00:09:20.542 "uuid": "48ab75ed-e830-4702-87c8-687f9107aa2c", 00:09:20.542 "strip_size_kb": 64, 00:09:20.542 "state": "configuring", 00:09:20.542 "raid_level": "concat", 00:09:20.542 "superblock": true, 00:09:20.542 "num_base_bdevs": 3, 00:09:20.542 "num_base_bdevs_discovered": 1, 00:09:20.542 "num_base_bdevs_operational": 3, 00:09:20.542 "base_bdevs_list": [ 00:09:20.542 { 00:09:20.542 "name": "pt1", 00:09:20.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.542 "is_configured": true, 00:09:20.542 "data_offset": 2048, 00:09:20.542 "data_size": 63488 00:09:20.542 }, 00:09:20.542 { 00:09:20.542 "name": null, 00:09:20.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.542 "is_configured": false, 00:09:20.542 "data_offset": 2048, 00:09:20.542 "data_size": 63488 00:09:20.542 }, 00:09:20.542 { 00:09:20.542 "name": null, 00:09:20.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.542 "is_configured": false, 00:09:20.542 "data_offset": 2048, 00:09:20.542 "data_size": 63488 00:09:20.542 } 00:09:20.542 ] 00:09:20.542 }' 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.542 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.801 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:20.801 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.801 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.801 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.801 [2024-10-15 09:08:38.688463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.801 [2024-10-15 09:08:38.688552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.801 [2024-10-15 09:08:38.688578] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:20.801 [2024-10-15 09:08:38.688588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.801 [2024-10-15 09:08:38.689113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.801 [2024-10-15 09:08:38.689141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.801 [2024-10-15 09:08:38.689240] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:20.801 [2024-10-15 09:08:38.689265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.801 pt2 00:09:20.801 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.801 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.801 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.801 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.061 [2024-10-15 09:08:38.696465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.061 "name": "raid_bdev1", 00:09:21.061 "uuid": "48ab75ed-e830-4702-87c8-687f9107aa2c", 00:09:21.061 "strip_size_kb": 64, 00:09:21.061 "state": "configuring", 00:09:21.061 "raid_level": "concat", 00:09:21.061 "superblock": true, 00:09:21.061 "num_base_bdevs": 3, 00:09:21.061 "num_base_bdevs_discovered": 1, 00:09:21.061 "num_base_bdevs_operational": 3, 00:09:21.061 "base_bdevs_list": [ 00:09:21.061 { 00:09:21.061 "name": "pt1", 00:09:21.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.061 "is_configured": true, 00:09:21.061 "data_offset": 2048, 00:09:21.061 "data_size": 63488 00:09:21.061 }, 00:09:21.061 { 00:09:21.061 "name": null, 00:09:21.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.061 "is_configured": false, 00:09:21.061 "data_offset": 0, 00:09:21.061 "data_size": 63488 00:09:21.061 }, 00:09:21.061 { 00:09:21.061 "name": null, 00:09:21.061 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.061 "is_configured": false, 00:09:21.061 "data_offset": 2048, 00:09:21.061 "data_size": 63488 00:09:21.061 } 00:09:21.061 ] 00:09:21.061 }' 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.061 09:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.320 [2024-10-15 09:08:39.155656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.320 [2024-10-15 09:08:39.155742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.320 [2024-10-15 09:08:39.155765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:21.320 [2024-10-15 09:08:39.155777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.320 [2024-10-15 09:08:39.156275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.320 [2024-10-15 09:08:39.156310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.320 [2024-10-15 09:08:39.156401] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:21.320 [2024-10-15 09:08:39.156431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.320 pt2 00:09:21.320 09:08:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.320 [2024-10-15 09:08:39.167648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:21.320 [2024-10-15 09:08:39.167781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.320 [2024-10-15 09:08:39.167823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:21.320 [2024-10-15 09:08:39.167859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.320 [2024-10-15 09:08:39.168361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.320 [2024-10-15 09:08:39.168441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:21.320 [2024-10-15 09:08:39.168556] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:21.320 [2024-10-15 09:08:39.168616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:21.320 [2024-10-15 09:08:39.168807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.320 [2024-10-15 09:08:39.168858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.320 [2024-10-15 09:08:39.169240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:21.320 [2024-10-15 09:08:39.169452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.320 [2024-10-15 09:08:39.169497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:21.320 [2024-10-15 09:08:39.169715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.320 pt3 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:21.320 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.321 09:08:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.321 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.580 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.580 "name": "raid_bdev1", 00:09:21.580 "uuid": "48ab75ed-e830-4702-87c8-687f9107aa2c", 00:09:21.580 "strip_size_kb": 64, 00:09:21.580 "state": "online", 00:09:21.580 "raid_level": "concat", 00:09:21.580 "superblock": true, 00:09:21.580 "num_base_bdevs": 3, 00:09:21.580 "num_base_bdevs_discovered": 3, 00:09:21.580 "num_base_bdevs_operational": 3, 00:09:21.580 "base_bdevs_list": [ 00:09:21.580 { 00:09:21.580 "name": "pt1", 00:09:21.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.580 "is_configured": true, 00:09:21.580 "data_offset": 2048, 00:09:21.580 "data_size": 63488 00:09:21.580 }, 00:09:21.580 { 00:09:21.580 "name": "pt2", 00:09:21.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.580 "is_configured": true, 00:09:21.580 "data_offset": 2048, 00:09:21.580 "data_size": 63488 00:09:21.580 }, 00:09:21.580 { 00:09:21.580 "name": "pt3", 00:09:21.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.580 "is_configured": true, 00:09:21.580 "data_offset": 2048, 00:09:21.580 "data_size": 63488 00:09:21.580 } 00:09:21.580 ] 00:09:21.580 }' 00:09:21.580 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.580 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.840 [2024-10-15 09:08:39.623269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.840 "name": "raid_bdev1", 00:09:21.840 "aliases": [ 00:09:21.840 "48ab75ed-e830-4702-87c8-687f9107aa2c" 00:09:21.840 ], 00:09:21.840 "product_name": "Raid Volume", 00:09:21.840 "block_size": 512, 00:09:21.840 "num_blocks": 190464, 00:09:21.840 "uuid": "48ab75ed-e830-4702-87c8-687f9107aa2c", 00:09:21.840 "assigned_rate_limits": { 00:09:21.840 "rw_ios_per_sec": 0, 00:09:21.840 "rw_mbytes_per_sec": 0, 00:09:21.840 "r_mbytes_per_sec": 0, 00:09:21.840 "w_mbytes_per_sec": 0 00:09:21.840 }, 00:09:21.840 "claimed": false, 00:09:21.840 "zoned": false, 00:09:21.840 "supported_io_types": { 00:09:21.840 "read": true, 00:09:21.840 "write": true, 00:09:21.840 "unmap": true, 00:09:21.840 "flush": true, 00:09:21.840 "reset": true, 00:09:21.840 "nvme_admin": false, 00:09:21.840 "nvme_io": false, 
00:09:21.840 "nvme_io_md": false, 00:09:21.840 "write_zeroes": true, 00:09:21.840 "zcopy": false, 00:09:21.840 "get_zone_info": false, 00:09:21.840 "zone_management": false, 00:09:21.840 "zone_append": false, 00:09:21.840 "compare": false, 00:09:21.840 "compare_and_write": false, 00:09:21.840 "abort": false, 00:09:21.840 "seek_hole": false, 00:09:21.840 "seek_data": false, 00:09:21.840 "copy": false, 00:09:21.840 "nvme_iov_md": false 00:09:21.840 }, 00:09:21.840 "memory_domains": [ 00:09:21.840 { 00:09:21.840 "dma_device_id": "system", 00:09:21.840 "dma_device_type": 1 00:09:21.840 }, 00:09:21.840 { 00:09:21.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.840 "dma_device_type": 2 00:09:21.840 }, 00:09:21.840 { 00:09:21.840 "dma_device_id": "system", 00:09:21.840 "dma_device_type": 1 00:09:21.840 }, 00:09:21.840 { 00:09:21.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.840 "dma_device_type": 2 00:09:21.840 }, 00:09:21.840 { 00:09:21.840 "dma_device_id": "system", 00:09:21.840 "dma_device_type": 1 00:09:21.840 }, 00:09:21.840 { 00:09:21.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.840 "dma_device_type": 2 00:09:21.840 } 00:09:21.840 ], 00:09:21.840 "driver_specific": { 00:09:21.840 "raid": { 00:09:21.840 "uuid": "48ab75ed-e830-4702-87c8-687f9107aa2c", 00:09:21.840 "strip_size_kb": 64, 00:09:21.840 "state": "online", 00:09:21.840 "raid_level": "concat", 00:09:21.840 "superblock": true, 00:09:21.840 "num_base_bdevs": 3, 00:09:21.840 "num_base_bdevs_discovered": 3, 00:09:21.840 "num_base_bdevs_operational": 3, 00:09:21.840 "base_bdevs_list": [ 00:09:21.840 { 00:09:21.840 "name": "pt1", 00:09:21.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.840 "is_configured": true, 00:09:21.840 "data_offset": 2048, 00:09:21.840 "data_size": 63488 00:09:21.840 }, 00:09:21.840 { 00:09:21.840 "name": "pt2", 00:09:21.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.840 "is_configured": true, 00:09:21.840 "data_offset": 2048, 00:09:21.840 
"data_size": 63488 00:09:21.840 }, 00:09:21.840 { 00:09:21.840 "name": "pt3", 00:09:21.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.840 "is_configured": true, 00:09:21.840 "data_offset": 2048, 00:09:21.840 "data_size": 63488 00:09:21.840 } 00:09:21.840 ] 00:09:21.840 } 00:09:21.840 } 00:09:21.840 }' 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:21.840 pt2 00:09:21.840 pt3' 00:09:21.840 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.100 [2024-10-15 09:08:39.914733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 48ab75ed-e830-4702-87c8-687f9107aa2c '!=' 48ab75ed-e830-4702-87c8-687f9107aa2c ']' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66928 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66928 ']' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66928 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.100 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66928 00:09:22.360 killing process with pid 66928 00:09:22.360 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.360 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.360 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66928' 00:09:22.360 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66928 00:09:22.360 [2024-10-15 09:08:39.998808] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:22.360 [2024-10-15 09:08:39.998928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.360 09:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66928 00:09:22.360 [2024-10-15 09:08:39.998998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.360 [2024-10-15 09:08:39.999025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:22.619 [2024-10-15 09:08:40.331875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.999 ************************************ 00:09:23.999 END TEST raid_superblock_test 00:09:23.999 ************************************ 00:09:23.999 09:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:23.999 00:09:23.999 real 0m5.471s 00:09:23.999 user 0m7.847s 00:09:23.999 sys 0m0.954s 00:09:23.999 09:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.999 09:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.999 09:08:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:23.999 09:08:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:23.999 09:08:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.999 09:08:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.999 ************************************ 00:09:23.999 START TEST raid_read_error_test 00:09:23.999 ************************************ 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:23.999 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:24.000 09:08:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0Q88tlySjS 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67192 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67192 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67192 ']' 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.000 09:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.000 [2024-10-15 09:08:41.650899] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:09:24.000 [2024-10-15 09:08:41.651130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67192 ] 00:09:24.000 [2024-10-15 09:08:41.801214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.260 [2024-10-15 09:08:41.917754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.260 [2024-10-15 09:08:42.126720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.260 [2024-10-15 09:08:42.126794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 BaseBdev1_malloc 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 true 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 [2024-10-15 09:08:42.566005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.828 [2024-10-15 09:08:42.566067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.828 [2024-10-15 09:08:42.566091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:24.828 [2024-10-15 09:08:42.566106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.828 [2024-10-15 09:08:42.568438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.828 [2024-10-15 09:08:42.568543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.828 BaseBdev1 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 BaseBdev2_malloc 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 true 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 [2024-10-15 09:08:42.637368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:24.828 [2024-10-15 09:08:42.637430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.828 [2024-10-15 09:08:42.637449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:24.828 [2024-10-15 09:08:42.637461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.828 [2024-10-15 09:08:42.639879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.828 [2024-10-15 09:08:42.639966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:24.828 BaseBdev2 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 BaseBdev3_malloc 00:09:24.828 09:08:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 true 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.828 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 [2024-10-15 09:08:42.717350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:24.828 [2024-10-15 09:08:42.717404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.828 [2024-10-15 09:08:42.717440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:24.828 [2024-10-15 09:08:42.717450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.828 [2024-10-15 09:08:42.719773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.828 [2024-10-15 09:08:42.719812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:25.089 BaseBdev3 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.089 [2024-10-15 09:08:42.729420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.089 [2024-10-15 09:08:42.731439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.089 [2024-10-15 09:08:42.731531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.089 [2024-10-15 09:08:42.731767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:25.089 [2024-10-15 09:08:42.731785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:25.089 [2024-10-15 09:08:42.732074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:25.089 [2024-10-15 09:08:42.732263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:25.089 [2024-10-15 09:08:42.732278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:25.089 [2024-10-15 09:08:42.732451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.089 09:08:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.089 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.089 "name": "raid_bdev1", 00:09:25.089 "uuid": "eaa6efee-c590-4a1a-9019-653b4eb29a35", 00:09:25.089 "strip_size_kb": 64, 00:09:25.089 "state": "online", 00:09:25.089 "raid_level": "concat", 00:09:25.089 "superblock": true, 00:09:25.089 "num_base_bdevs": 3, 00:09:25.089 "num_base_bdevs_discovered": 3, 00:09:25.089 "num_base_bdevs_operational": 3, 00:09:25.089 "base_bdevs_list": [ 00:09:25.089 { 00:09:25.089 "name": "BaseBdev1", 00:09:25.089 "uuid": "8ee35b1f-0d7e-5fa8-af88-922bdb5d5fc8", 00:09:25.089 "is_configured": true, 00:09:25.089 "data_offset": 2048, 00:09:25.089 "data_size": 63488 00:09:25.089 }, 00:09:25.089 { 00:09:25.089 "name": "BaseBdev2", 00:09:25.089 "uuid": "184e2cac-365e-5d1d-b892-b57a1d9f4ab1", 00:09:25.089 "is_configured": true, 00:09:25.089 "data_offset": 2048, 00:09:25.089 "data_size": 63488 
00:09:25.090 }, 00:09:25.090 { 00:09:25.090 "name": "BaseBdev3", 00:09:25.090 "uuid": "daad11b6-4c94-5dc2-85e0-c856cc8cdb8a", 00:09:25.090 "is_configured": true, 00:09:25.090 "data_offset": 2048, 00:09:25.090 "data_size": 63488 00:09:25.090 } 00:09:25.090 ] 00:09:25.090 }' 00:09:25.090 09:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.090 09:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.349 09:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:25.349 09:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:25.607 [2024-10-15 09:08:43.301777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.543 "name": "raid_bdev1", 00:09:26.543 "uuid": "eaa6efee-c590-4a1a-9019-653b4eb29a35", 00:09:26.543 "strip_size_kb": 64, 00:09:26.543 "state": "online", 00:09:26.543 "raid_level": "concat", 00:09:26.543 "superblock": true, 00:09:26.543 "num_base_bdevs": 3, 00:09:26.543 "num_base_bdevs_discovered": 3, 00:09:26.543 "num_base_bdevs_operational": 3, 00:09:26.543 "base_bdevs_list": [ 00:09:26.543 { 00:09:26.543 "name": "BaseBdev1", 00:09:26.543 "uuid": "8ee35b1f-0d7e-5fa8-af88-922bdb5d5fc8", 00:09:26.543 "is_configured": true, 00:09:26.543 "data_offset": 2048, 00:09:26.543 "data_size": 63488 
00:09:26.543 }, 00:09:26.543 { 00:09:26.543 "name": "BaseBdev2", 00:09:26.543 "uuid": "184e2cac-365e-5d1d-b892-b57a1d9f4ab1", 00:09:26.543 "is_configured": true, 00:09:26.543 "data_offset": 2048, 00:09:26.543 "data_size": 63488 00:09:26.543 }, 00:09:26.543 { 00:09:26.543 "name": "BaseBdev3", 00:09:26.543 "uuid": "daad11b6-4c94-5dc2-85e0-c856cc8cdb8a", 00:09:26.543 "is_configured": true, 00:09:26.543 "data_offset": 2048, 00:09:26.543 "data_size": 63488 00:09:26.543 } 00:09:26.543 ] 00:09:26.543 }' 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.543 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.801 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.801 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.802 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.802 [2024-10-15 09:08:44.666240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.802 [2024-10-15 09:08:44.666274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.802 [2024-10-15 09:08:44.669451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.802 [2024-10-15 09:08:44.669502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.802 [2024-10-15 09:08:44.669542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.802 [2024-10-15 09:08:44.669552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:26.802 { 00:09:26.802 "results": [ 00:09:26.802 { 00:09:26.802 "job": "raid_bdev1", 00:09:26.802 "core_mask": "0x1", 00:09:26.802 "workload": "randrw", 00:09:26.802 "percentage": 50, 
00:09:26.802 "status": "finished", 00:09:26.802 "queue_depth": 1, 00:09:26.802 "io_size": 131072, 00:09:26.802 "runtime": 1.365208, 00:09:26.802 "iops": 14857.809212955095, 00:09:26.802 "mibps": 1857.2261516193869, 00:09:26.802 "io_failed": 1, 00:09:26.802 "io_timeout": 0, 00:09:26.802 "avg_latency_us": 93.4840975487943, 00:09:26.802 "min_latency_us": 27.276855895196505, 00:09:26.802 "max_latency_us": 1452.380786026201 00:09:26.802 } 00:09:26.802 ], 00:09:26.802 "core_count": 1 00:09:26.802 } 00:09:26.802 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.802 09:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67192 00:09:26.802 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67192 ']' 00:09:26.802 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67192 00:09:26.802 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:26.802 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.802 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67192 00:09:27.060 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.060 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.060 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67192' 00:09:27.060 killing process with pid 67192 00:09:27.060 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67192 00:09:27.060 [2024-10-15 09:08:44.704300] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.060 09:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67192 00:09:27.060 [2024-10-15 
09:08:44.950534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.437 09:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.437 09:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0Q88tlySjS 00:09:28.438 09:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:28.438 09:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:28.438 09:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:28.438 ************************************ 00:09:28.438 END TEST raid_read_error_test 00:09:28.438 ************************************ 00:09:28.438 09:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.438 09:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.438 09:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:28.438 00:09:28.438 real 0m4.637s 00:09:28.438 user 0m5.513s 00:09:28.438 sys 0m0.575s 00:09:28.438 09:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.438 09:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.438 09:08:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:28.438 09:08:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:28.438 09:08:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.438 09:08:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.438 ************************************ 00:09:28.438 START TEST raid_write_error_test 00:09:28.438 ************************************ 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:28.438 09:08:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:28.438 09:08:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.59aivlQ1fZ 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67332 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67332 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67332 ']' 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.438 09:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.697 [2024-10-15 09:08:46.360810] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:09:28.697 [2024-10-15 09:08:46.360941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67332 ] 00:09:28.697 [2024-10-15 09:08:46.514842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.956 [2024-10-15 09:08:46.644411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.215 [2024-10-15 09:08:46.875613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.215 [2024-10-15 09:08:46.875691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.474 BaseBdev1_malloc 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.474 true 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.474 [2024-10-15 09:08:47.358083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.474 [2024-10-15 09:08:47.358155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.474 [2024-10-15 09:08:47.358178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.474 [2024-10-15 09:08:47.358189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.474 [2024-10-15 09:08:47.360415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.474 [2024-10-15 09:08:47.360535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:29.474 BaseBdev1 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.474 09:08:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.734 BaseBdev2_malloc 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.735 true 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.735 [2024-10-15 09:08:47.424540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:29.735 [2024-10-15 09:08:47.424597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.735 [2024-10-15 09:08:47.424613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:29.735 [2024-10-15 09:08:47.424622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.735 [2024-10-15 09:08:47.426809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.735 [2024-10-15 09:08:47.426848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:29.735 BaseBdev2 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.735 09:08:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.735 BaseBdev3_malloc 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.735 true 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.735 [2024-10-15 09:08:47.508939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:29.735 [2024-10-15 09:08:47.509003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.735 [2024-10-15 09:08:47.509027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:29.735 [2024-10-15 09:08:47.509040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.735 [2024-10-15 09:08:47.511673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.735 [2024-10-15 09:08:47.511732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:29.735 BaseBdev3 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.735 [2024-10-15 09:08:47.521057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.735 [2024-10-15 09:08:47.523815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.735 [2024-10-15 09:08:47.523957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.735 [2024-10-15 09:08:47.524265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:29.735 [2024-10-15 09:08:47.524289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:29.735 [2024-10-15 09:08:47.524727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:29.735 [2024-10-15 09:08:47.524988] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:29.735 [2024-10-15 09:08:47.525038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:29.735 [2024-10-15 09:08:47.525363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.735 "name": "raid_bdev1", 00:09:29.735 "uuid": "ec73e97c-95f0-44bf-b5d5-6f4987ca9140", 00:09:29.735 "strip_size_kb": 64, 00:09:29.735 "state": "online", 00:09:29.735 "raid_level": "concat", 00:09:29.735 "superblock": true, 00:09:29.735 "num_base_bdevs": 3, 00:09:29.735 "num_base_bdevs_discovered": 3, 00:09:29.735 "num_base_bdevs_operational": 3, 00:09:29.735 "base_bdevs_list": [ 00:09:29.735 { 00:09:29.735 
"name": "BaseBdev1", 00:09:29.735 "uuid": "343942bc-a01c-520b-ae5c-1d442ebfcf0a", 00:09:29.735 "is_configured": true, 00:09:29.735 "data_offset": 2048, 00:09:29.735 "data_size": 63488 00:09:29.735 }, 00:09:29.735 { 00:09:29.735 "name": "BaseBdev2", 00:09:29.735 "uuid": "06c50dd2-d829-512d-8261-2230321816d8", 00:09:29.735 "is_configured": true, 00:09:29.735 "data_offset": 2048, 00:09:29.735 "data_size": 63488 00:09:29.735 }, 00:09:29.735 { 00:09:29.735 "name": "BaseBdev3", 00:09:29.735 "uuid": "b83be287-62ce-5613-9d2f-b3d2332eef74", 00:09:29.735 "is_configured": true, 00:09:29.735 "data_offset": 2048, 00:09:29.735 "data_size": 63488 00:09:29.735 } 00:09:29.735 ] 00:09:29.735 }' 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.735 09:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.304 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:30.304 09:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.304 [2024-10-15 09:08:48.070009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:31.243 09:08:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.243 09:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.243 09:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:31.243 "name": "raid_bdev1",
00:09:31.243 "uuid": "ec73e97c-95f0-44bf-b5d5-6f4987ca9140",
00:09:31.243 "strip_size_kb": 64,
00:09:31.243 "state": "online",
00:09:31.243 "raid_level": "concat",
00:09:31.243 "superblock": true,
00:09:31.243 "num_base_bdevs": 3,
00:09:31.243 "num_base_bdevs_discovered": 3,
00:09:31.243 "num_base_bdevs_operational": 3,
00:09:31.243 "base_bdevs_list": [
00:09:31.243 {
00:09:31.243 "name": "BaseBdev1",
00:09:31.243 "uuid": "343942bc-a01c-520b-ae5c-1d442ebfcf0a",
00:09:31.243 "is_configured": true,
00:09:31.243 "data_offset": 2048,
00:09:31.243 "data_size": 63488
00:09:31.243 },
00:09:31.243 {
00:09:31.243 "name": "BaseBdev2",
00:09:31.243 "uuid": "06c50dd2-d829-512d-8261-2230321816d8",
00:09:31.243 "is_configured": true,
00:09:31.243 "data_offset": 2048,
00:09:31.243 "data_size": 63488
00:09:31.243 },
00:09:31.243 {
00:09:31.243 "name": "BaseBdev3",
00:09:31.243 "uuid": "b83be287-62ce-5613-9d2f-b3d2332eef74",
00:09:31.243 "is_configured": true,
00:09:31.243 "data_offset": 2048,
00:09:31.243 "data_size": 63488
00:09:31.243 }
00:09:31.243 ]
00:09:31.243 }'
00:09:31.243 09:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:31.243 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.814 [2024-10-15 09:08:49.458977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:31.814 [2024-10-15 09:08:49.459084] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:31.814 [2024-10-15 09:08:49.461974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:31.814 [2024-10-15 09:08:49.462070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:31.814 [2024-10-15 09:08:49.462127] bdev_raid.c:
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:31.814 [2024-10-15 09:08:49.462166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:09:31.814 {
00:09:31.814 "results": [
00:09:31.814 {
00:09:31.814 "job": "raid_bdev1",
00:09:31.814 "core_mask": "0x1",
00:09:31.814 "workload": "randrw",
00:09:31.814 "percentage": 50,
00:09:31.814 "status": "finished",
00:09:31.814 "queue_depth": 1,
00:09:31.814 "io_size": 131072,
00:09:31.814 "runtime": 1.389834,
00:09:31.814 "iops": 14469.35389406217,
00:09:31.814 "mibps": 1808.6692367577712,
00:09:31.814 "io_failed": 1,
00:09:31.814 "io_timeout": 0,
00:09:31.814 "avg_latency_us": 95.96068214423053,
00:09:31.814 "min_latency_us": 27.053275109170304,
00:09:31.814 "max_latency_us": 1695.6366812227075
00:09:31.814 }
00:09:31.814 ],
00:09:31.814 "core_count": 1
00:09:31.814 }
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67332
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67332 ']'
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67332
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67332
00:09:31.814 killing process with pid 67332 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:31.814 09:08:49
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67332'
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67332
00:09:31.814 [2024-10-15 09:08:49.509180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:31.814 09:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67332
00:09:32.074 [2024-10-15 09:08:49.752359] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.59aivlQ1fZ
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:09:33.455
00:09:33.455 real 0m4.712s
00:09:33.455 user 0m5.682s
00:09:33.455 sys 0m0.584s
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:33.455 ************************************
00:09:33.455 END TEST raid_write_error_test
00:09:33.455 ************************************
00:09:33.455 09:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.455 09:08:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:33.455 09:08:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test
raid_state_function_test raid1 3 false
00:09:33.455 09:08:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:33.455 09:08:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:33.455 09:08:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:33.455 ************************************
00:09:33.455 START TEST raid_state_function_test
00:09:33.455 ************************************
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test
-- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67476
00:09:33.455 Process raid pid: 67476 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67476'
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67476
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67476 ']'
00:09:33.455 Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:33.455 09:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.455 [2024-10-15 09:08:51.142760] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization...
00:09:33.455 [2024-10-15 09:08:51.142896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:33.455 [2024-10-15 09:08:51.312622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:33.715 [2024-10-15 09:08:51.452841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:33.975 [2024-10-15 09:08:51.667680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:33.975 [2024-10-15 09:08:51.667734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test --
common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.233 [2024-10-15 09:08:52.024092] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:34.233 [2024-10-15 09:08:52.024155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:34.233 [2024-10-15 09:08:52.024167] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:34.233 [2024-10-15 09:08:52.024178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:34.233 [2024-10-15 09:08:52.024186] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:34.233 [2024-10-15 09:08:52.024197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:34.233
09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:34.233 "name": "Existed_Raid",
00:09:34.233 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.233 "strip_size_kb": 0,
00:09:34.233 "state": "configuring",
00:09:34.233 "raid_level": "raid1",
00:09:34.233 "superblock": false,
00:09:34.233 "num_base_bdevs": 3,
00:09:34.233 "num_base_bdevs_discovered": 0,
00:09:34.233 "num_base_bdevs_operational": 3,
00:09:34.233 "base_bdevs_list": [
00:09:34.233 {
00:09:34.233 "name": "BaseBdev1",
00:09:34.233 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.233 "is_configured": false,
00:09:34.233 "data_offset": 0,
00:09:34.233 "data_size": 0
00:09:34.233 },
00:09:34.233 {
00:09:34.233 "name": "BaseBdev2",
00:09:34.233 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.233 "is_configured": false,
00:09:34.233 "data_offset": 0,
00:09:34.233 "data_size": 0
00:09:34.233 },
00:09:34.233 {
00:09:34.233 "name": "BaseBdev3",
00:09:34.233 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.233 "is_configured": false,
00:09:34.233 "data_offset": 0,
00:09:34.233 "data_size": 0
00:09:34.233 }
00:09:34.233 ]
00:09:34.233 }'
00:09:34.233 09:08:52
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:34.233 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.801 [2024-10-15 09:08:52.507218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:34.801 [2024-10-15 09:08:52.507304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.801 [2024-10-15 09:08:52.515219] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:34.801 [2024-10-15 09:08:52.515303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:34.801 [2024-10-15 09:08:52.515349] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:34.801 [2024-10-15 09:08:52.515375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:34.801 [2024-10-15 09:08:52.515397] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:34.801 [2024-10-15 09:08:52.515422]
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.801 [2024-10-15 09:08:52.562798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:34.801 BaseBdev1
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test --
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.801 [
00:09:34.801 {
00:09:34.801 "name": "BaseBdev1",
00:09:34.801 "aliases": [
00:09:34.801 "d299764c-b6e8-4a8a-9316-54488f1f6ade"
00:09:34.801 ],
00:09:34.801 "product_name": "Malloc disk",
00:09:34.801 "block_size": 512,
00:09:34.801 "num_blocks": 65536,
00:09:34.801 "uuid": "d299764c-b6e8-4a8a-9316-54488f1f6ade",
00:09:34.801 "assigned_rate_limits": {
00:09:34.801 "rw_ios_per_sec": 0,
00:09:34.801 "rw_mbytes_per_sec": 0,
00:09:34.801 "r_mbytes_per_sec": 0,
00:09:34.801 "w_mbytes_per_sec": 0
00:09:34.801 },
00:09:34.801 "claimed": true,
00:09:34.801 "claim_type": "exclusive_write",
00:09:34.801 "zoned": false,
00:09:34.801 "supported_io_types": {
00:09:34.801 "read": true,
00:09:34.801 "write": true,
00:09:34.801 "unmap": true,
00:09:34.801 "flush": true,
00:09:34.801 "reset": true,
00:09:34.801 "nvme_admin": false,
00:09:34.801 "nvme_io": false,
00:09:34.801 "nvme_io_md": false,
00:09:34.801 "write_zeroes": true,
00:09:34.801 "zcopy": true,
00:09:34.801 "get_zone_info": false,
00:09:34.801 "zone_management": false,
00:09:34.801 "zone_append": false,
00:09:34.801 "compare": false,
00:09:34.801 "compare_and_write": false,
00:09:34.801 "abort": true,
00:09:34.801 "seek_hole": false,
00:09:34.801 "seek_data": false,
00:09:34.801 "copy": true,
00:09:34.801 "nvme_iov_md": false
00:09:34.801 },
00:09:34.801 "memory_domains": [
00:09:34.801 {
00:09:34.801 "dma_device_id": "system",
00:09:34.801 "dma_device_type": 1
00:09:34.801 },
00:09:34.801 {
00:09:34.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:34.801 "dma_device_type": 2
00:09:34.801 }
00:09:34.801 ],
00:09:34.801 "driver_specific": {}
00:09:34.801 }
00:09:34.801 ]
00:09:34.801 09:08:52
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113
-- # raid_bdev_info='{
00:09:34.801 "name": "Existed_Raid",
00:09:34.801 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.801 "strip_size_kb": 0,
00:09:34.801 "state": "configuring",
00:09:34.801 "raid_level": "raid1",
00:09:34.801 "superblock": false,
00:09:34.801 "num_base_bdevs": 3,
00:09:34.801 "num_base_bdevs_discovered": 1,
00:09:34.801 "num_base_bdevs_operational": 3,
00:09:34.801 "base_bdevs_list": [
00:09:34.801 {
00:09:34.801 "name": "BaseBdev1",
00:09:34.801 "uuid": "d299764c-b6e8-4a8a-9316-54488f1f6ade",
00:09:34.801 "is_configured": true,
00:09:34.801 "data_offset": 0,
00:09:34.801 "data_size": 65536
00:09:34.801 },
00:09:34.801 {
00:09:34.801 "name": "BaseBdev2",
00:09:34.801 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.801 "is_configured": false,
00:09:34.801 "data_offset": 0,
00:09:34.801 "data_size": 0
00:09:34.801 },
00:09:34.801 {
00:09:34.801 "name": "BaseBdev3",
00:09:34.801 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.801 "is_configured": false,
00:09:34.801 "data_offset": 0,
00:09:34.801 "data_size": 0
00:09:34.801 }
00:09:34.801 ]
00:09:34.801 }'
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:34.801 09:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.370 [2024-10-15 09:08:53.042027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:35.370 [2024-10-15 09:08:53.042134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.370 [2024-10-15 09:08:53.050036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:35.370 [2024-10-15 09:08:53.051848] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:35.370 [2024-10-15 09:08:53.051891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:35.370 [2024-10-15 09:08:53.051901] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:35.370 [2024-10-15 09:08:53.051910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local
strip_size=0
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:35.370 "name": "Existed_Raid",
00:09:35.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:35.370 "strip_size_kb": 0,
00:09:35.370 "state": "configuring",
00:09:35.370 "raid_level": "raid1",
00:09:35.370 "superblock": false,
00:09:35.370 "num_base_bdevs": 3,
00:09:35.370 "num_base_bdevs_discovered": 1,
00:09:35.370 "num_base_bdevs_operational": 3,
00:09:35.370 "base_bdevs_list": [
00:09:35.370 {
00:09:35.370 "name": "BaseBdev1",
00:09:35.370 "uuid": "d299764c-b6e8-4a8a-9316-54488f1f6ade",
00:09:35.370 "is_configured": true,
00:09:35.370 "data_offset": 0,
00:09:35.370 "data_size": 65536
00:09:35.370 },
00:09:35.370 {
00:09:35.370 "name": "BaseBdev2",
00:09:35.370 "uuid": "00000000-0000-0000-0000-000000000000",
"is_configured": false,
00:09:35.370 "data_offset": 0,
00:09:35.370 "data_size": 0
00:09:35.370 },
00:09:35.370 {
00:09:35.370 "name": "BaseBdev3",
00:09:35.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:35.370 "is_configured": false,
00:09:35.370 "data_offset": 0,
00:09:35.370 "data_size": 0
00:09:35.370 }
00:09:35.370 ]
00:09:35.370 }'
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:35.370 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.629 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:35.629 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.630 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:35.889 [2024-10-15 09:08:53.540654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:35.889 BaseBdev2
00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:35.889 09:08:53
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.889 [ 00:09:35.889 { 00:09:35.889 "name": "BaseBdev2", 00:09:35.889 "aliases": [ 00:09:35.889 "c6cb868f-c91a-4264-9220-15a7d1d41745" 00:09:35.889 ], 00:09:35.889 "product_name": "Malloc disk", 00:09:35.889 "block_size": 512, 00:09:35.889 "num_blocks": 65536, 00:09:35.889 "uuid": "c6cb868f-c91a-4264-9220-15a7d1d41745", 00:09:35.889 "assigned_rate_limits": { 00:09:35.889 "rw_ios_per_sec": 0, 00:09:35.889 "rw_mbytes_per_sec": 0, 00:09:35.889 "r_mbytes_per_sec": 0, 00:09:35.889 "w_mbytes_per_sec": 0 00:09:35.889 }, 00:09:35.889 "claimed": true, 00:09:35.889 "claim_type": "exclusive_write", 00:09:35.889 "zoned": false, 00:09:35.889 "supported_io_types": { 00:09:35.889 "read": true, 00:09:35.889 "write": true, 00:09:35.889 "unmap": true, 00:09:35.889 "flush": true, 00:09:35.889 "reset": true, 00:09:35.889 "nvme_admin": false, 00:09:35.889 "nvme_io": false, 00:09:35.889 "nvme_io_md": false, 00:09:35.889 "write_zeroes": true, 00:09:35.889 "zcopy": true, 00:09:35.889 "get_zone_info": false, 00:09:35.889 "zone_management": false, 00:09:35.889 "zone_append": false, 00:09:35.889 "compare": false, 00:09:35.889 "compare_and_write": false, 00:09:35.889 "abort": true, 00:09:35.889 "seek_hole": false, 00:09:35.889 "seek_data": false, 00:09:35.889 "copy": true, 00:09:35.889 "nvme_iov_md": false 00:09:35.889 }, 00:09:35.889 
"memory_domains": [ 00:09:35.889 { 00:09:35.889 "dma_device_id": "system", 00:09:35.889 "dma_device_type": 1 00:09:35.889 }, 00:09:35.889 { 00:09:35.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.889 "dma_device_type": 2 00:09:35.889 } 00:09:35.889 ], 00:09:35.889 "driver_specific": {} 00:09:35.889 } 00:09:35.889 ] 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.889 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.889 "name": "Existed_Raid", 00:09:35.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.889 "strip_size_kb": 0, 00:09:35.889 "state": "configuring", 00:09:35.889 "raid_level": "raid1", 00:09:35.889 "superblock": false, 00:09:35.889 "num_base_bdevs": 3, 00:09:35.889 "num_base_bdevs_discovered": 2, 00:09:35.889 "num_base_bdevs_operational": 3, 00:09:35.889 "base_bdevs_list": [ 00:09:35.889 { 00:09:35.889 "name": "BaseBdev1", 00:09:35.889 "uuid": "d299764c-b6e8-4a8a-9316-54488f1f6ade", 00:09:35.889 "is_configured": true, 00:09:35.890 "data_offset": 0, 00:09:35.890 "data_size": 65536 00:09:35.890 }, 00:09:35.890 { 00:09:35.890 "name": "BaseBdev2", 00:09:35.890 "uuid": "c6cb868f-c91a-4264-9220-15a7d1d41745", 00:09:35.890 "is_configured": true, 00:09:35.890 "data_offset": 0, 00:09:35.890 "data_size": 65536 00:09:35.890 }, 00:09:35.890 { 00:09:35.890 "name": "BaseBdev3", 00:09:35.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.890 "is_configured": false, 00:09:35.890 "data_offset": 0, 00:09:35.890 "data_size": 0 00:09:35.890 } 00:09:35.890 ] 00:09:35.890 }' 00:09:35.890 09:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.890 09:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.149 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:36.149 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.149 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.408 [2024-10-15 09:08:54.093040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.408 [2024-10-15 09:08:54.093190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:36.408 [2024-10-15 09:08:54.093223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:36.408 [2024-10-15 09:08:54.093523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:36.408 [2024-10-15 09:08:54.093749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:36.408 [2024-10-15 09:08:54.093795] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:36.408 [2024-10-15 09:08:54.094088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.408 BaseBdev3 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.408 [ 00:09:36.408 { 00:09:36.408 "name": "BaseBdev3", 00:09:36.408 "aliases": [ 00:09:36.408 "6aeb810f-c7c6-4966-8c26-dd1a9ee59870" 00:09:36.408 ], 00:09:36.408 "product_name": "Malloc disk", 00:09:36.408 "block_size": 512, 00:09:36.408 "num_blocks": 65536, 00:09:36.408 "uuid": "6aeb810f-c7c6-4966-8c26-dd1a9ee59870", 00:09:36.408 "assigned_rate_limits": { 00:09:36.408 "rw_ios_per_sec": 0, 00:09:36.408 "rw_mbytes_per_sec": 0, 00:09:36.408 "r_mbytes_per_sec": 0, 00:09:36.408 "w_mbytes_per_sec": 0 00:09:36.408 }, 00:09:36.408 "claimed": true, 00:09:36.408 "claim_type": "exclusive_write", 00:09:36.408 "zoned": false, 00:09:36.408 "supported_io_types": { 00:09:36.408 "read": true, 00:09:36.408 "write": true, 00:09:36.408 "unmap": true, 00:09:36.408 "flush": true, 00:09:36.408 "reset": true, 00:09:36.408 "nvme_admin": false, 00:09:36.408 "nvme_io": false, 00:09:36.408 "nvme_io_md": false, 00:09:36.408 "write_zeroes": true, 00:09:36.408 "zcopy": true, 00:09:36.408 "get_zone_info": false, 00:09:36.408 "zone_management": false, 00:09:36.408 "zone_append": false, 00:09:36.408 "compare": false, 00:09:36.408 "compare_and_write": false, 00:09:36.408 "abort": true, 00:09:36.408 "seek_hole": false, 00:09:36.408 "seek_data": false, 00:09:36.408 
"copy": true, 00:09:36.408 "nvme_iov_md": false 00:09:36.408 }, 00:09:36.408 "memory_domains": [ 00:09:36.408 { 00:09:36.408 "dma_device_id": "system", 00:09:36.408 "dma_device_type": 1 00:09:36.408 }, 00:09:36.408 { 00:09:36.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.408 "dma_device_type": 2 00:09:36.408 } 00:09:36.408 ], 00:09:36.408 "driver_specific": {} 00:09:36.408 } 00:09:36.408 ] 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.408 09:08:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.408 "name": "Existed_Raid", 00:09:36.408 "uuid": "38dab271-e1bf-41c8-a01d-2e8bef3754b4", 00:09:36.408 "strip_size_kb": 0, 00:09:36.408 "state": "online", 00:09:36.408 "raid_level": "raid1", 00:09:36.408 "superblock": false, 00:09:36.408 "num_base_bdevs": 3, 00:09:36.408 "num_base_bdevs_discovered": 3, 00:09:36.408 "num_base_bdevs_operational": 3, 00:09:36.408 "base_bdevs_list": [ 00:09:36.408 { 00:09:36.408 "name": "BaseBdev1", 00:09:36.408 "uuid": "d299764c-b6e8-4a8a-9316-54488f1f6ade", 00:09:36.408 "is_configured": true, 00:09:36.408 "data_offset": 0, 00:09:36.408 "data_size": 65536 00:09:36.408 }, 00:09:36.408 { 00:09:36.408 "name": "BaseBdev2", 00:09:36.408 "uuid": "c6cb868f-c91a-4264-9220-15a7d1d41745", 00:09:36.408 "is_configured": true, 00:09:36.408 "data_offset": 0, 00:09:36.408 "data_size": 65536 00:09:36.408 }, 00:09:36.408 { 00:09:36.408 "name": "BaseBdev3", 00:09:36.408 "uuid": "6aeb810f-c7c6-4966-8c26-dd1a9ee59870", 00:09:36.408 "is_configured": true, 00:09:36.408 "data_offset": 0, 00:09:36.408 "data_size": 65536 00:09:36.408 } 00:09:36.408 ] 00:09:36.408 }' 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.408 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.000 09:08:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.000 [2024-10-15 09:08:54.616639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.000 "name": "Existed_Raid", 00:09:37.000 "aliases": [ 00:09:37.000 "38dab271-e1bf-41c8-a01d-2e8bef3754b4" 00:09:37.000 ], 00:09:37.000 "product_name": "Raid Volume", 00:09:37.000 "block_size": 512, 00:09:37.000 "num_blocks": 65536, 00:09:37.000 "uuid": "38dab271-e1bf-41c8-a01d-2e8bef3754b4", 00:09:37.000 "assigned_rate_limits": { 00:09:37.000 "rw_ios_per_sec": 0, 00:09:37.000 "rw_mbytes_per_sec": 0, 00:09:37.000 "r_mbytes_per_sec": 0, 00:09:37.000 "w_mbytes_per_sec": 0 00:09:37.000 }, 00:09:37.000 "claimed": false, 00:09:37.000 "zoned": false, 
00:09:37.000 "supported_io_types": { 00:09:37.000 "read": true, 00:09:37.000 "write": true, 00:09:37.000 "unmap": false, 00:09:37.000 "flush": false, 00:09:37.000 "reset": true, 00:09:37.000 "nvme_admin": false, 00:09:37.000 "nvme_io": false, 00:09:37.000 "nvme_io_md": false, 00:09:37.000 "write_zeroes": true, 00:09:37.000 "zcopy": false, 00:09:37.000 "get_zone_info": false, 00:09:37.000 "zone_management": false, 00:09:37.000 "zone_append": false, 00:09:37.000 "compare": false, 00:09:37.000 "compare_and_write": false, 00:09:37.000 "abort": false, 00:09:37.000 "seek_hole": false, 00:09:37.000 "seek_data": false, 00:09:37.000 "copy": false, 00:09:37.000 "nvme_iov_md": false 00:09:37.000 }, 00:09:37.000 "memory_domains": [ 00:09:37.000 { 00:09:37.000 "dma_device_id": "system", 00:09:37.000 "dma_device_type": 1 00:09:37.000 }, 00:09:37.000 { 00:09:37.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.000 "dma_device_type": 2 00:09:37.000 }, 00:09:37.000 { 00:09:37.000 "dma_device_id": "system", 00:09:37.000 "dma_device_type": 1 00:09:37.000 }, 00:09:37.000 { 00:09:37.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.000 "dma_device_type": 2 00:09:37.000 }, 00:09:37.000 { 00:09:37.000 "dma_device_id": "system", 00:09:37.000 "dma_device_type": 1 00:09:37.000 }, 00:09:37.000 { 00:09:37.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.000 "dma_device_type": 2 00:09:37.000 } 00:09:37.000 ], 00:09:37.000 "driver_specific": { 00:09:37.000 "raid": { 00:09:37.000 "uuid": "38dab271-e1bf-41c8-a01d-2e8bef3754b4", 00:09:37.000 "strip_size_kb": 0, 00:09:37.000 "state": "online", 00:09:37.000 "raid_level": "raid1", 00:09:37.000 "superblock": false, 00:09:37.000 "num_base_bdevs": 3, 00:09:37.000 "num_base_bdevs_discovered": 3, 00:09:37.000 "num_base_bdevs_operational": 3, 00:09:37.000 "base_bdevs_list": [ 00:09:37.000 { 00:09:37.000 "name": "BaseBdev1", 00:09:37.000 "uuid": "d299764c-b6e8-4a8a-9316-54488f1f6ade", 00:09:37.000 "is_configured": true, 00:09:37.000 
"data_offset": 0, 00:09:37.000 "data_size": 65536 00:09:37.000 }, 00:09:37.000 { 00:09:37.000 "name": "BaseBdev2", 00:09:37.000 "uuid": "c6cb868f-c91a-4264-9220-15a7d1d41745", 00:09:37.000 "is_configured": true, 00:09:37.000 "data_offset": 0, 00:09:37.000 "data_size": 65536 00:09:37.000 }, 00:09:37.000 { 00:09:37.000 "name": "BaseBdev3", 00:09:37.000 "uuid": "6aeb810f-c7c6-4966-8c26-dd1a9ee59870", 00:09:37.000 "is_configured": true, 00:09:37.000 "data_offset": 0, 00:09:37.000 "data_size": 65536 00:09:37.000 } 00:09:37.000 ] 00:09:37.000 } 00:09:37.000 } 00:09:37.000 }' 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.000 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:37.000 BaseBdev2 00:09:37.000 BaseBdev3' 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.001 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.269 [2024-10-15 09:08:54.891894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.269 09:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.269 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.269 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.269 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.269 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.269 "name": "Existed_Raid", 00:09:37.269 "uuid": "38dab271-e1bf-41c8-a01d-2e8bef3754b4", 00:09:37.269 "strip_size_kb": 0, 00:09:37.269 "state": "online", 00:09:37.269 "raid_level": "raid1", 00:09:37.269 "superblock": false, 00:09:37.269 "num_base_bdevs": 3, 00:09:37.269 "num_base_bdevs_discovered": 2, 00:09:37.269 "num_base_bdevs_operational": 2, 00:09:37.269 "base_bdevs_list": [ 00:09:37.269 { 00:09:37.269 "name": null, 00:09:37.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.269 "is_configured": false, 00:09:37.269 "data_offset": 0, 00:09:37.269 "data_size": 65536 00:09:37.269 }, 00:09:37.269 { 00:09:37.269 "name": "BaseBdev2", 00:09:37.269 "uuid": "c6cb868f-c91a-4264-9220-15a7d1d41745", 00:09:37.269 "is_configured": true, 00:09:37.269 "data_offset": 0, 00:09:37.269 "data_size": 65536 00:09:37.269 }, 00:09:37.269 { 00:09:37.269 "name": "BaseBdev3", 00:09:37.269 "uuid": "6aeb810f-c7c6-4966-8c26-dd1a9ee59870", 00:09:37.269 "is_configured": true, 00:09:37.269 "data_offset": 0, 00:09:37.269 "data_size": 65536 00:09:37.269 } 00:09:37.269 ] 
00:09:37.269 }' 00:09:37.269 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.269 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.837 [2024-10-15 09:08:55.565729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.837 09:08:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.837 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.098 [2024-10-15 09:08:55.738013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.098 [2024-10-15 09:08:55.738128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.098 [2024-10-15 09:08:55.840432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.098 [2024-10-15 09:08:55.840499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.098 [2024-10-15 09:08:55.840512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:38.098 09:08:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.098 BaseBdev2 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.098 
09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.098 [
00:09:38.098 {
00:09:38.098 "name": "BaseBdev2",
00:09:38.098 "aliases": [
00:09:38.098 "b2844c0c-1eda-4727-8a63-dc63677cee87"
00:09:38.098 ],
00:09:38.098 "product_name": "Malloc disk",
00:09:38.098 "block_size": 512,
00:09:38.098 "num_blocks": 65536,
00:09:38.098 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87",
00:09:38.098 "assigned_rate_limits": {
00:09:38.098 "rw_ios_per_sec": 0,
00:09:38.098 "rw_mbytes_per_sec": 0,
00:09:38.098 "r_mbytes_per_sec": 0,
00:09:38.098 "w_mbytes_per_sec": 0
00:09:38.098 },
00:09:38.098 "claimed": false,
00:09:38.098 "zoned": false,
00:09:38.098 "supported_io_types": {
00:09:38.098 "read": true,
00:09:38.098 "write": true,
00:09:38.098 "unmap": true,
00:09:38.098 "flush": true,
00:09:38.098 "reset": true,
00:09:38.098 "nvme_admin": false,
00:09:38.098 "nvme_io": false,
00:09:38.098 "nvme_io_md": false,
00:09:38.098 "write_zeroes": true,
00:09:38.098 "zcopy": true,
00:09:38.098 "get_zone_info": false,
00:09:38.098 "zone_management": false,
00:09:38.098 "zone_append": false,
00:09:38.098 "compare": false,
00:09:38.098 "compare_and_write": false,
00:09:38.098 "abort": true,
00:09:38.098 "seek_hole": false,
00:09:38.098 "seek_data": false,
00:09:38.098 "copy": true,
00:09:38.098 "nvme_iov_md": false
00:09:38.098 },
00:09:38.098 "memory_domains": [
00:09:38.098 {
00:09:38.098 "dma_device_id": "system",
00:09:38.098 "dma_device_type": 1
00:09:38.098 },
00:09:38.098 {
00:09:38.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.098 "dma_device_type": 2
00:09:38.098 }
00:09:38.098 ],
00:09:38.098 "driver_specific": {}
00:09:38.098 }
00:09:38.098 ]
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.098 09:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.360 BaseBdev3
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.360 [
00:09:38.360 {
00:09:38.360 "name": "BaseBdev3",
00:09:38.360 "aliases": [
00:09:38.360 "9f4ff91b-46bc-4cee-933a-079e09433448"
00:09:38.360 ],
00:09:38.360 "product_name": "Malloc disk",
00:09:38.360 "block_size": 512,
00:09:38.360 "num_blocks": 65536,
00:09:38.360 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448",
00:09:38.360 "assigned_rate_limits": {
00:09:38.360 "rw_ios_per_sec": 0,
00:09:38.360 "rw_mbytes_per_sec": 0,
00:09:38.360 "r_mbytes_per_sec": 0,
00:09:38.360 "w_mbytes_per_sec": 0
00:09:38.360 },
00:09:38.360 "claimed": false,
00:09:38.360 "zoned": false,
00:09:38.360 "supported_io_types": {
00:09:38.360 "read": true,
00:09:38.360 "write": true,
00:09:38.360 "unmap": true,
00:09:38.360 "flush": true,
00:09:38.360 "reset": true,
00:09:38.360 "nvme_admin": false,
00:09:38.360 "nvme_io": false,
00:09:38.360 "nvme_io_md": false,
00:09:38.360 "write_zeroes": true,
00:09:38.360 "zcopy": true,
00:09:38.360 "get_zone_info": false,
00:09:38.360 "zone_management": false,
00:09:38.360 "zone_append": false,
00:09:38.360 "compare": false,
00:09:38.360 "compare_and_write": false,
00:09:38.360 "abort": true,
00:09:38.360 "seek_hole": false,
00:09:38.360 "seek_data": false,
00:09:38.360 "copy": true,
00:09:38.360 "nvme_iov_md": false
00:09:38.360 },
00:09:38.360 "memory_domains": [
00:09:38.360 {
00:09:38.360 "dma_device_id": "system",
00:09:38.360 "dma_device_type": 1
00:09:38.360 },
00:09:38.360 {
00:09:38.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:38.360 "dma_device_type": 2
00:09:38.360 }
00:09:38.360 ],
00:09:38.360 "driver_specific": {}
00:09:38.360 }
00:09:38.360 ]
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.360 [2024-10-15 09:08:56.070178] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:38.360 [2024-10-15 09:08:56.070273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:38.360 [2024-10-15 09:08:56.070320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:38.360 [2024-10-15 09:08:56.072220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.360 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:38.360 "name": "Existed_Raid",
00:09:38.360 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.360 "strip_size_kb": 0,
00:09:38.360 "state": "configuring",
00:09:38.360 "raid_level": "raid1",
00:09:38.360 "superblock": false,
00:09:38.360 "num_base_bdevs": 3,
00:09:38.361 "num_base_bdevs_discovered": 2,
00:09:38.361 "num_base_bdevs_operational": 3,
00:09:38.361 "base_bdevs_list": [
00:09:38.361 {
00:09:38.361 "name": "BaseBdev1",
00:09:38.361 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.361 "is_configured": false,
00:09:38.361 "data_offset": 0,
00:09:38.361 "data_size": 0
00:09:38.361 },
00:09:38.361 {
00:09:38.361 "name": "BaseBdev2",
00:09:38.361 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87",
00:09:38.361 "is_configured": true,
00:09:38.361 "data_offset": 0,
00:09:38.361 "data_size": 65536
00:09:38.361 },
00:09:38.361 {
00:09:38.361 "name": "BaseBdev3",
00:09:38.361 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448",
00:09:38.361 "is_configured": true,
00:09:38.361 "data_offset": 0,
00:09:38.361 "data_size": 65536
00:09:38.361 }
00:09:38.361 ]
00:09:38.361 }'
00:09:38.361 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:38.361 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.930 [2024-10-15 09:08:56.541359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:38.930 "name": "Existed_Raid",
00:09:38.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.930 "strip_size_kb": 0,
00:09:38.930 "state": "configuring",
00:09:38.930 "raid_level": "raid1",
00:09:38.930 "superblock": false,
00:09:38.930 "num_base_bdevs": 3,
00:09:38.930 "num_base_bdevs_discovered": 1,
00:09:38.930 "num_base_bdevs_operational": 3,
00:09:38.930 "base_bdevs_list": [
00:09:38.930 {
00:09:38.930 "name": "BaseBdev1",
00:09:38.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:38.930 "is_configured": false,
00:09:38.930 "data_offset": 0,
00:09:38.930 "data_size": 0
00:09:38.930 },
00:09:38.930 {
00:09:38.930 "name": null,
00:09:38.930 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87",
00:09:38.930 "is_configured": false,
00:09:38.930 "data_offset": 0,
00:09:38.930 "data_size": 65536
00:09:38.930 },
00:09:38.930 {
00:09:38.930 "name": "BaseBdev3",
00:09:38.930 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448",
00:09:38.930 "is_configured": true,
00:09:38.930 "data_offset": 0,
00:09:38.930 "data_size": 65536
00:09:38.930 }
00:09:38.930 ]
00:09:38.930 }'
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:38.930 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.189 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.189 09:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:39.189 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.189 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.189 09:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.189 [2024-10-15 09:08:57.067030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:39.189 BaseBdev1
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.189 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:39.190 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.190 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.449 [
00:09:39.449 {
00:09:39.449 "name": "BaseBdev1",
00:09:39.449 "aliases": [
00:09:39.449 "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec"
00:09:39.449 ],
00:09:39.449 "product_name": "Malloc disk",
00:09:39.449 "block_size": 512,
00:09:39.449 "num_blocks": 65536,
00:09:39.449 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec",
00:09:39.449 "assigned_rate_limits": {
00:09:39.449 "rw_ios_per_sec": 0,
00:09:39.449 "rw_mbytes_per_sec": 0,
00:09:39.449 "r_mbytes_per_sec": 0,
00:09:39.449 "w_mbytes_per_sec": 0
00:09:39.449 },
00:09:39.449 "claimed": true,
00:09:39.449 "claim_type": "exclusive_write",
00:09:39.449 "zoned": false,
00:09:39.449 "supported_io_types": {
00:09:39.449 "read": true,
00:09:39.449 "write": true,
00:09:39.449 "unmap": true,
00:09:39.449 "flush": true,
00:09:39.449 "reset": true,
00:09:39.449 "nvme_admin": false,
00:09:39.449 "nvme_io": false,
00:09:39.449 "nvme_io_md": false,
00:09:39.449 "write_zeroes": true,
00:09:39.449 "zcopy": true,
00:09:39.449 "get_zone_info": false,
00:09:39.449 "zone_management": false,
00:09:39.449 "zone_append": false,
00:09:39.449 "compare": false,
00:09:39.449 "compare_and_write": false,
00:09:39.449 "abort": true,
00:09:39.449 "seek_hole": false,
00:09:39.449 "seek_data": false,
00:09:39.449 "copy": true,
00:09:39.449 "nvme_iov_md": false
00:09:39.449 },
00:09:39.449 "memory_domains": [
00:09:39.449 {
00:09:39.449 "dma_device_id": "system",
00:09:39.449 "dma_device_type": 1
00:09:39.449 },
00:09:39.449 {
00:09:39.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:39.449 "dma_device_type": 2
00:09:39.449 }
00:09:39.449 ],
00:09:39.449 "driver_specific": {}
00:09:39.449 }
00:09:39.449 ]
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.449 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:39.449 "name": "Existed_Raid",
00:09:39.450 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:39.450 "strip_size_kb": 0,
00:09:39.450 "state": "configuring",
00:09:39.450 "raid_level": "raid1",
00:09:39.450 "superblock": false,
00:09:39.450 "num_base_bdevs": 3,
00:09:39.450 "num_base_bdevs_discovered": 2,
00:09:39.450 "num_base_bdevs_operational": 3,
00:09:39.450 "base_bdevs_list": [
00:09:39.450 {
00:09:39.450 "name": "BaseBdev1",
00:09:39.450 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec",
00:09:39.450 "is_configured": true,
00:09:39.450 "data_offset": 0,
00:09:39.450 "data_size": 65536
00:09:39.450 },
00:09:39.450 {
00:09:39.450 "name": null,
00:09:39.450 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87",
00:09:39.450 "is_configured": false,
00:09:39.450 "data_offset": 0,
00:09:39.450 "data_size": 65536
00:09:39.450 },
00:09:39.450 {
00:09:39.450 "name": "BaseBdev3",
00:09:39.450 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448",
00:09:39.450 "is_configured": true,
00:09:39.450 "data_offset": 0,
00:09:39.450 "data_size": 65536
00:09:39.450 }
00:09:39.450 ]
00:09:39.450 }'
00:09:39.450 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:39.450 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.709 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.709 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:39.709 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.709 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.709 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.969 [2024-10-15 09:08:57.610192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:39.969 "name": "Existed_Raid",
00:09:39.969 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:39.969 "strip_size_kb": 0,
00:09:39.969 "state": "configuring",
00:09:39.969 "raid_level": "raid1",
00:09:39.969 "superblock": false,
00:09:39.969 "num_base_bdevs": 3,
00:09:39.969 "num_base_bdevs_discovered": 1,
00:09:39.969 "num_base_bdevs_operational": 3,
00:09:39.969 "base_bdevs_list": [
00:09:39.969 {
00:09:39.969 "name": "BaseBdev1",
00:09:39.969 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec",
00:09:39.969 "is_configured": true,
00:09:39.969 "data_offset": 0,
00:09:39.969 "data_size": 65536
00:09:39.969 },
00:09:39.969 {
00:09:39.969 "name": null,
00:09:39.969 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87",
00:09:39.969 "is_configured": false,
00:09:39.969 "data_offset": 0,
00:09:39.969 "data_size": 65536
00:09:39.969 },
00:09:39.969 {
00:09:39.969 "name": null,
00:09:39.969 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448",
00:09:39.969 "is_configured": false,
00:09:39.969 "data_offset": 0,
00:09:39.969 "data_size": 65536
00:09:39.969 }
00:09:39.969 ]
00:09:39.969 }'
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:39.969 09:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.231 [2024-10-15 09:08:58.089557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.231 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.491 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:40.491 "name": "Existed_Raid",
00:09:40.491 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:40.491 "strip_size_kb": 0,
00:09:40.491 "state": "configuring",
00:09:40.491 "raid_level": "raid1",
00:09:40.491 "superblock": false,
00:09:40.491 "num_base_bdevs": 3,
00:09:40.491 "num_base_bdevs_discovered": 2,
00:09:40.491 "num_base_bdevs_operational": 3,
00:09:40.491 "base_bdevs_list": [
00:09:40.491 {
00:09:40.491 "name": "BaseBdev1",
00:09:40.491 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec",
00:09:40.491 "is_configured": true,
00:09:40.491 "data_offset": 0,
00:09:40.491 "data_size": 65536
00:09:40.491 },
00:09:40.491 {
00:09:40.491 "name": null,
00:09:40.491 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87",
00:09:40.491 "is_configured": false,
00:09:40.491 "data_offset": 0,
00:09:40.491 "data_size": 65536
00:09:40.491 },
00:09:40.491 {
00:09:40.491 "name": "BaseBdev3",
00:09:40.491 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448",
00:09:40.491 "is_configured": true,
00:09:40.491 "data_offset": 0,
00:09:40.491 "data_size": 65536
00:09:40.491 }
00:09:40.491 ]
00:09:40.491 }'
00:09:40.491 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:40.491 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:40.750 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:40.750 [2024-10-15 09:08:58.612785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.013 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:41.013 "name": "Existed_Raid",
00:09:41.013 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:41.013 "strip_size_kb": 0,
00:09:41.013 "state": "configuring",
00:09:41.013 "raid_level": "raid1",
00:09:41.013 "superblock": false,
00:09:41.013 "num_base_bdevs": 3,
00:09:41.013 "num_base_bdevs_discovered": 1,
00:09:41.013 "num_base_bdevs_operational": 3,
00:09:41.013 "base_bdevs_list": [
00:09:41.013 {
00:09:41.013 "name": null,
00:09:41.013 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec",
00:09:41.013 "is_configured": false,
00:09:41.013 "data_offset": 0,
00:09:41.013 "data_size": 65536
00:09:41.013 },
00:09:41.013 {
00:09:41.013 "name": null,
00:09:41.013 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87",
00:09:41.013 "is_configured": false,
00:09:41.013 "data_offset": 0,
00:09:41.013 "data_size": 65536
00:09:41.013 },
00:09:41.013 {
00:09:41.013 "name": "BaseBdev3",
00:09:41.013 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448",
00:09:41.013 "is_configured": true,
00:09:41.013 "data_offset": 0,
00:09:41.013 "data_size": 65536
00:09:41.013 }
00:09:41.013 ]
00:09:41.013 }'
00:09:41.014 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:41.014 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.591 [2024-10-15 09:08:59.252436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.591 "name": "Existed_Raid", 00:09:41.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.591 "strip_size_kb": 0, 00:09:41.591 "state": "configuring", 00:09:41.591 "raid_level": "raid1", 00:09:41.591 "superblock": false, 00:09:41.591 "num_base_bdevs": 3, 00:09:41.591 "num_base_bdevs_discovered": 2, 00:09:41.591 "num_base_bdevs_operational": 3, 00:09:41.591 "base_bdevs_list": [ 00:09:41.591 { 00:09:41.591 "name": null, 00:09:41.591 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec", 00:09:41.591 "is_configured": false, 00:09:41.591 "data_offset": 0, 00:09:41.591 "data_size": 65536 00:09:41.591 }, 00:09:41.591 { 00:09:41.591 "name": "BaseBdev2", 00:09:41.591 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87", 00:09:41.591 "is_configured": true, 00:09:41.591 "data_offset": 0, 00:09:41.591 "data_size": 65536 00:09:41.591 }, 00:09:41.591 { 00:09:41.591 "name": "BaseBdev3", 
00:09:41.591 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448", 00:09:41.591 "is_configured": true, 00:09:41.591 "data_offset": 0, 00:09:41.591 "data_size": 65536 00:09:41.591 } 00:09:41.591 ] 00:09:41.591 }' 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.591 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.851 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.851 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.851 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.851 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.851 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.111 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:42.111 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3d2e33bc-5b2f-4dea-9912-3186fdf7fcec 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.112 [2024-10-15 09:08:59.855110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:42.112 [2024-10-15 09:08:59.855185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:42.112 [2024-10-15 09:08:59.855196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:42.112 [2024-10-15 09:08:59.855544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:42.112 [2024-10-15 09:08:59.855822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:42.112 NewBaseBdev 00:09:42.112 [2024-10-15 09:08:59.855967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:42.112 [2024-10-15 09:08:59.856314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.112 
09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.112 [ 00:09:42.112 { 00:09:42.112 "name": "NewBaseBdev", 00:09:42.112 "aliases": [ 00:09:42.112 "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec" 00:09:42.112 ], 00:09:42.112 "product_name": "Malloc disk", 00:09:42.112 "block_size": 512, 00:09:42.112 "num_blocks": 65536, 00:09:42.112 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec", 00:09:42.112 "assigned_rate_limits": { 00:09:42.112 "rw_ios_per_sec": 0, 00:09:42.112 "rw_mbytes_per_sec": 0, 00:09:42.112 "r_mbytes_per_sec": 0, 00:09:42.112 "w_mbytes_per_sec": 0 00:09:42.112 }, 00:09:42.112 "claimed": true, 00:09:42.112 "claim_type": "exclusive_write", 00:09:42.112 "zoned": false, 00:09:42.112 "supported_io_types": { 00:09:42.112 "read": true, 00:09:42.112 "write": true, 00:09:42.112 "unmap": true, 00:09:42.112 "flush": true, 00:09:42.112 "reset": true, 00:09:42.112 "nvme_admin": false, 00:09:42.112 "nvme_io": false, 00:09:42.112 "nvme_io_md": false, 00:09:42.112 "write_zeroes": true, 00:09:42.112 "zcopy": true, 00:09:42.112 "get_zone_info": false, 00:09:42.112 "zone_management": false, 00:09:42.112 "zone_append": false, 00:09:42.112 "compare": false, 00:09:42.112 "compare_and_write": false, 00:09:42.112 "abort": true, 00:09:42.112 "seek_hole": false, 00:09:42.112 "seek_data": false, 00:09:42.112 "copy": true, 00:09:42.112 "nvme_iov_md": false 00:09:42.112 }, 00:09:42.112 "memory_domains": [ 00:09:42.112 { 00:09:42.112 "dma_device_id": "system", 00:09:42.112 "dma_device_type": 1 
00:09:42.112 }, 00:09:42.112 { 00:09:42.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.112 "dma_device_type": 2 00:09:42.112 } 00:09:42.112 ], 00:09:42.112 "driver_specific": {} 00:09:42.112 } 00:09:42.112 ] 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.112 "name": "Existed_Raid", 00:09:42.112 "uuid": "4d4db2fe-3c02-4ac6-bcfb-07a85cf3d89a", 00:09:42.112 "strip_size_kb": 0, 00:09:42.112 "state": "online", 00:09:42.112 "raid_level": "raid1", 00:09:42.112 "superblock": false, 00:09:42.112 "num_base_bdevs": 3, 00:09:42.112 "num_base_bdevs_discovered": 3, 00:09:42.112 "num_base_bdevs_operational": 3, 00:09:42.112 "base_bdevs_list": [ 00:09:42.112 { 00:09:42.112 "name": "NewBaseBdev", 00:09:42.112 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec", 00:09:42.112 "is_configured": true, 00:09:42.112 "data_offset": 0, 00:09:42.112 "data_size": 65536 00:09:42.112 }, 00:09:42.112 { 00:09:42.112 "name": "BaseBdev2", 00:09:42.112 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87", 00:09:42.112 "is_configured": true, 00:09:42.112 "data_offset": 0, 00:09:42.112 "data_size": 65536 00:09:42.112 }, 00:09:42.112 { 00:09:42.112 "name": "BaseBdev3", 00:09:42.112 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448", 00:09:42.112 "is_configured": true, 00:09:42.112 "data_offset": 0, 00:09:42.112 "data_size": 65536 00:09:42.112 } 00:09:42.112 ] 00:09:42.112 }' 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.112 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.682 [2024-10-15 09:09:00.334765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.682 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.682 "name": "Existed_Raid", 00:09:42.682 "aliases": [ 00:09:42.682 "4d4db2fe-3c02-4ac6-bcfb-07a85cf3d89a" 00:09:42.682 ], 00:09:42.682 "product_name": "Raid Volume", 00:09:42.682 "block_size": 512, 00:09:42.682 "num_blocks": 65536, 00:09:42.682 "uuid": "4d4db2fe-3c02-4ac6-bcfb-07a85cf3d89a", 00:09:42.682 "assigned_rate_limits": { 00:09:42.682 "rw_ios_per_sec": 0, 00:09:42.682 "rw_mbytes_per_sec": 0, 00:09:42.682 "r_mbytes_per_sec": 0, 00:09:42.682 "w_mbytes_per_sec": 0 00:09:42.682 }, 00:09:42.682 "claimed": false, 00:09:42.682 "zoned": false, 00:09:42.682 "supported_io_types": { 00:09:42.682 "read": true, 00:09:42.682 "write": true, 00:09:42.682 "unmap": false, 00:09:42.682 "flush": false, 00:09:42.682 "reset": true, 00:09:42.682 "nvme_admin": false, 00:09:42.682 "nvme_io": false, 00:09:42.682 "nvme_io_md": false, 00:09:42.682 "write_zeroes": true, 00:09:42.682 "zcopy": false, 00:09:42.682 "get_zone_info": false, 00:09:42.682 "zone_management": false, 00:09:42.682 
"zone_append": false, 00:09:42.682 "compare": false, 00:09:42.682 "compare_and_write": false, 00:09:42.682 "abort": false, 00:09:42.682 "seek_hole": false, 00:09:42.682 "seek_data": false, 00:09:42.682 "copy": false, 00:09:42.682 "nvme_iov_md": false 00:09:42.682 }, 00:09:42.682 "memory_domains": [ 00:09:42.682 { 00:09:42.682 "dma_device_id": "system", 00:09:42.682 "dma_device_type": 1 00:09:42.682 }, 00:09:42.682 { 00:09:42.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.682 "dma_device_type": 2 00:09:42.682 }, 00:09:42.682 { 00:09:42.682 "dma_device_id": "system", 00:09:42.682 "dma_device_type": 1 00:09:42.682 }, 00:09:42.682 { 00:09:42.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.682 "dma_device_type": 2 00:09:42.682 }, 00:09:42.682 { 00:09:42.682 "dma_device_id": "system", 00:09:42.682 "dma_device_type": 1 00:09:42.682 }, 00:09:42.682 { 00:09:42.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.682 "dma_device_type": 2 00:09:42.682 } 00:09:42.682 ], 00:09:42.682 "driver_specific": { 00:09:42.682 "raid": { 00:09:42.682 "uuid": "4d4db2fe-3c02-4ac6-bcfb-07a85cf3d89a", 00:09:42.682 "strip_size_kb": 0, 00:09:42.682 "state": "online", 00:09:42.682 "raid_level": "raid1", 00:09:42.682 "superblock": false, 00:09:42.682 "num_base_bdevs": 3, 00:09:42.682 "num_base_bdevs_discovered": 3, 00:09:42.682 "num_base_bdevs_operational": 3, 00:09:42.682 "base_bdevs_list": [ 00:09:42.682 { 00:09:42.682 "name": "NewBaseBdev", 00:09:42.682 "uuid": "3d2e33bc-5b2f-4dea-9912-3186fdf7fcec", 00:09:42.682 "is_configured": true, 00:09:42.682 "data_offset": 0, 00:09:42.682 "data_size": 65536 00:09:42.682 }, 00:09:42.682 { 00:09:42.682 "name": "BaseBdev2", 00:09:42.682 "uuid": "b2844c0c-1eda-4727-8a63-dc63677cee87", 00:09:42.682 "is_configured": true, 00:09:42.682 "data_offset": 0, 00:09:42.682 "data_size": 65536 00:09:42.682 }, 00:09:42.682 { 00:09:42.682 "name": "BaseBdev3", 00:09:42.682 "uuid": "9f4ff91b-46bc-4cee-933a-079e09433448", 00:09:42.682 "is_configured": true, 
00:09:42.682 "data_offset": 0, 00:09:42.682 "data_size": 65536 00:09:42.682 } 00:09:42.682 ] 00:09:42.683 } 00:09:42.683 } 00:09:42.683 }' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:42.683 BaseBdev2 00:09:42.683 BaseBdev3' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.683 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.943 [2024-10-15 09:09:00.633895] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:42.943 [2024-10-15 09:09:00.634030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.943 [2024-10-15 09:09:00.634162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.943 [2024-10-15 09:09:00.634503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.943 [2024-10-15 09:09:00.634517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67476 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67476 ']' 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67476 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67476 00:09:42.943 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.944 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.944 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67476' 00:09:42.944 killing process with pid 67476 00:09:42.944 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67476 00:09:42.944 [2024-10-15 09:09:00.684754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:42.944 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67476 00:09:43.203 [2024-10-15 09:09:01.041268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.584 00:09:44.584 real 0m11.317s 00:09:44.584 user 0m17.860s 00:09:44.584 sys 0m2.003s 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.584 ************************************ 00:09:44.584 END TEST raid_state_function_test 00:09:44.584 ************************************ 00:09:44.584 09:09:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:44.584 09:09:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:44.584 09:09:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.584 09:09:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.584 ************************************ 00:09:44.584 START TEST raid_state_function_test_sb 00:09:44.584 ************************************ 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:44.584 Process raid pid: 68108 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68108 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68108' 00:09:44.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68108 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68108 ']' 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.584 09:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:44.843 [2024-10-15 09:09:02.529733] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:09:44.843 [2024-10-15 09:09:02.529973] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.843 [2024-10-15 09:09:02.700330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.103 [2024-10-15 09:09:02.834219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.362 [2024-10-15 09:09:03.063299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.362 [2024-10-15 09:09:03.063424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.621 [2024-10-15 09:09:03.401011] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.621 [2024-10-15 09:09:03.401093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.621 [2024-10-15 09:09:03.401108] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.621 [2024-10-15 09:09:03.401123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.621 [2024-10-15 09:09:03.401133] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:45.621 [2024-10-15 09:09:03.401148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.621 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.621 "name": "Existed_Raid", 00:09:45.621 "uuid": "ce7a7576-e424-477d-a5ed-f7d755eab1f4", 00:09:45.621 "strip_size_kb": 0, 00:09:45.621 "state": "configuring", 00:09:45.621 "raid_level": "raid1", 00:09:45.621 "superblock": true, 00:09:45.621 "num_base_bdevs": 3, 00:09:45.621 "num_base_bdevs_discovered": 0, 00:09:45.621 "num_base_bdevs_operational": 3, 00:09:45.621 "base_bdevs_list": [ 00:09:45.621 { 00:09:45.621 "name": "BaseBdev1", 00:09:45.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.621 "is_configured": false, 00:09:45.621 "data_offset": 0, 00:09:45.621 "data_size": 0 00:09:45.621 }, 00:09:45.621 { 00:09:45.621 "name": "BaseBdev2", 00:09:45.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.622 "is_configured": false, 00:09:45.622 "data_offset": 0, 00:09:45.622 "data_size": 0 00:09:45.622 }, 00:09:45.622 { 00:09:45.622 "name": "BaseBdev3", 00:09:45.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.622 "is_configured": false, 00:09:45.622 "data_offset": 0, 00:09:45.622 "data_size": 0 00:09:45.622 } 00:09:45.622 ] 00:09:45.622 }' 00:09:45.622 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.622 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.211 [2024-10-15 09:09:03.840221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.211 [2024-10-15 09:09:03.840379] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.211 [2024-10-15 09:09:03.852277] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.211 [2024-10-15 09:09:03.852446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.211 [2024-10-15 09:09:03.852485] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.211 [2024-10-15 09:09:03.852516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.211 [2024-10-15 09:09:03.852539] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.211 [2024-10-15 09:09:03.852574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.211 [2024-10-15 09:09:03.916513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.211 BaseBdev1 
00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.211 [ 00:09:46.211 { 00:09:46.211 "name": "BaseBdev1", 00:09:46.211 "aliases": [ 00:09:46.211 "adbcd267-bc52-4186-8133-03227867e55e" 00:09:46.211 ], 00:09:46.211 "product_name": "Malloc disk", 00:09:46.211 "block_size": 512, 00:09:46.211 "num_blocks": 65536, 00:09:46.211 "uuid": "adbcd267-bc52-4186-8133-03227867e55e", 00:09:46.211 "assigned_rate_limits": { 00:09:46.211 
"rw_ios_per_sec": 0, 00:09:46.211 "rw_mbytes_per_sec": 0, 00:09:46.211 "r_mbytes_per_sec": 0, 00:09:46.211 "w_mbytes_per_sec": 0 00:09:46.211 }, 00:09:46.211 "claimed": true, 00:09:46.211 "claim_type": "exclusive_write", 00:09:46.211 "zoned": false, 00:09:46.211 "supported_io_types": { 00:09:46.211 "read": true, 00:09:46.211 "write": true, 00:09:46.211 "unmap": true, 00:09:46.211 "flush": true, 00:09:46.211 "reset": true, 00:09:46.211 "nvme_admin": false, 00:09:46.211 "nvme_io": false, 00:09:46.211 "nvme_io_md": false, 00:09:46.211 "write_zeroes": true, 00:09:46.211 "zcopy": true, 00:09:46.211 "get_zone_info": false, 00:09:46.211 "zone_management": false, 00:09:46.211 "zone_append": false, 00:09:46.211 "compare": false, 00:09:46.211 "compare_and_write": false, 00:09:46.211 "abort": true, 00:09:46.211 "seek_hole": false, 00:09:46.211 "seek_data": false, 00:09:46.211 "copy": true, 00:09:46.211 "nvme_iov_md": false 00:09:46.211 }, 00:09:46.211 "memory_domains": [ 00:09:46.211 { 00:09:46.211 "dma_device_id": "system", 00:09:46.211 "dma_device_type": 1 00:09:46.211 }, 00:09:46.211 { 00:09:46.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.211 "dma_device_type": 2 00:09:46.211 } 00:09:46.211 ], 00:09:46.211 "driver_specific": {} 00:09:46.211 } 00:09:46.211 ] 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.211 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.212 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.212 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.212 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.212 "name": "Existed_Raid", 00:09:46.212 "uuid": "be0fba7a-5d6d-41b6-8737-78d5c01d16c6", 00:09:46.212 "strip_size_kb": 0, 00:09:46.212 "state": "configuring", 00:09:46.212 "raid_level": "raid1", 00:09:46.212 "superblock": true, 00:09:46.212 "num_base_bdevs": 3, 00:09:46.212 "num_base_bdevs_discovered": 1, 00:09:46.212 "num_base_bdevs_operational": 3, 00:09:46.212 "base_bdevs_list": [ 00:09:46.212 { 00:09:46.212 "name": "BaseBdev1", 00:09:46.212 "uuid": "adbcd267-bc52-4186-8133-03227867e55e", 00:09:46.212 "is_configured": true, 00:09:46.212 "data_offset": 2048, 00:09:46.212 "data_size": 63488 
00:09:46.212 }, 00:09:46.212 { 00:09:46.212 "name": "BaseBdev2", 00:09:46.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.212 "is_configured": false, 00:09:46.212 "data_offset": 0, 00:09:46.212 "data_size": 0 00:09:46.212 }, 00:09:46.212 { 00:09:46.212 "name": "BaseBdev3", 00:09:46.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.212 "is_configured": false, 00:09:46.212 "data_offset": 0, 00:09:46.212 "data_size": 0 00:09:46.212 } 00:09:46.212 ] 00:09:46.212 }' 00:09:46.212 09:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.212 09:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.781 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.781 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.781 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.782 [2024-10-15 09:09:04.379856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.782 [2024-10-15 09:09:04.379945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.782 [2024-10-15 09:09:04.387906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.782 [2024-10-15 09:09:04.390296] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.782 [2024-10-15 09:09:04.390358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.782 [2024-10-15 09:09:04.390371] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.782 [2024-10-15 09:09:04.390383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.782 "name": "Existed_Raid", 00:09:46.782 "uuid": "5f146ee5-e3a7-47b6-8c50-5e88a4e464a9", 00:09:46.782 "strip_size_kb": 0, 00:09:46.782 "state": "configuring", 00:09:46.782 "raid_level": "raid1", 00:09:46.782 "superblock": true, 00:09:46.782 "num_base_bdevs": 3, 00:09:46.782 "num_base_bdevs_discovered": 1, 00:09:46.782 "num_base_bdevs_operational": 3, 00:09:46.782 "base_bdevs_list": [ 00:09:46.782 { 00:09:46.782 "name": "BaseBdev1", 00:09:46.782 "uuid": "adbcd267-bc52-4186-8133-03227867e55e", 00:09:46.782 "is_configured": true, 00:09:46.782 "data_offset": 2048, 00:09:46.782 "data_size": 63488 00:09:46.782 }, 00:09:46.782 { 00:09:46.782 "name": "BaseBdev2", 00:09:46.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.782 "is_configured": false, 00:09:46.782 "data_offset": 0, 00:09:46.782 "data_size": 0 00:09:46.782 }, 00:09:46.782 { 00:09:46.782 "name": "BaseBdev3", 00:09:46.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.782 "is_configured": false, 00:09:46.782 "data_offset": 0, 00:09:46.782 "data_size": 0 00:09:46.782 } 00:09:46.782 ] 00:09:46.782 }' 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.782 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.041 [2024-10-15 09:09:04.903490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.041 BaseBdev2 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.041 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.042 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:47.042 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.042 [ 00:09:47.042 { 00:09:47.042 "name": "BaseBdev2", 00:09:47.042 "aliases": [ 00:09:47.042 "71a0952a-b2e7-464f-88a6-828bd4a362aa" 00:09:47.042 ], 00:09:47.042 "product_name": "Malloc disk", 00:09:47.042 "block_size": 512, 00:09:47.042 "num_blocks": 65536, 00:09:47.042 "uuid": "71a0952a-b2e7-464f-88a6-828bd4a362aa", 00:09:47.042 "assigned_rate_limits": { 00:09:47.042 "rw_ios_per_sec": 0, 00:09:47.042 "rw_mbytes_per_sec": 0, 00:09:47.042 "r_mbytes_per_sec": 0, 00:09:47.042 "w_mbytes_per_sec": 0 00:09:47.042 }, 00:09:47.042 "claimed": true, 00:09:47.042 "claim_type": "exclusive_write", 00:09:47.042 "zoned": false, 00:09:47.042 "supported_io_types": { 00:09:47.042 "read": true, 00:09:47.042 "write": true, 00:09:47.042 "unmap": true, 00:09:47.042 "flush": true, 00:09:47.301 "reset": true, 00:09:47.301 "nvme_admin": false, 00:09:47.301 "nvme_io": false, 00:09:47.301 "nvme_io_md": false, 00:09:47.301 "write_zeroes": true, 00:09:47.301 "zcopy": true, 00:09:47.301 "get_zone_info": false, 00:09:47.301 "zone_management": false, 00:09:47.301 "zone_append": false, 00:09:47.301 "compare": false, 00:09:47.301 "compare_and_write": false, 00:09:47.301 "abort": true, 00:09:47.301 "seek_hole": false, 00:09:47.301 "seek_data": false, 00:09:47.301 "copy": true, 00:09:47.301 "nvme_iov_md": false 00:09:47.301 }, 00:09:47.301 "memory_domains": [ 00:09:47.301 { 00:09:47.301 "dma_device_id": "system", 00:09:47.301 "dma_device_type": 1 00:09:47.301 }, 00:09:47.301 { 00:09:47.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.301 "dma_device_type": 2 00:09:47.301 } 00:09:47.301 ], 00:09:47.301 "driver_specific": {} 00:09:47.301 } 00:09:47.301 ] 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.301 09:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.301 
09:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.301 "name": "Existed_Raid", 00:09:47.301 "uuid": "5f146ee5-e3a7-47b6-8c50-5e88a4e464a9", 00:09:47.301 "strip_size_kb": 0, 00:09:47.301 "state": "configuring", 00:09:47.301 "raid_level": "raid1", 00:09:47.301 "superblock": true, 00:09:47.301 "num_base_bdevs": 3, 00:09:47.301 "num_base_bdevs_discovered": 2, 00:09:47.301 "num_base_bdevs_operational": 3, 00:09:47.301 "base_bdevs_list": [ 00:09:47.301 { 00:09:47.301 "name": "BaseBdev1", 00:09:47.301 "uuid": "adbcd267-bc52-4186-8133-03227867e55e", 00:09:47.301 "is_configured": true, 00:09:47.301 "data_offset": 2048, 00:09:47.301 "data_size": 63488 00:09:47.301 }, 00:09:47.301 { 00:09:47.301 "name": "BaseBdev2", 00:09:47.301 "uuid": "71a0952a-b2e7-464f-88a6-828bd4a362aa", 00:09:47.301 "is_configured": true, 00:09:47.301 "data_offset": 2048, 00:09:47.301 "data_size": 63488 00:09:47.301 }, 00:09:47.301 { 00:09:47.301 "name": "BaseBdev3", 00:09:47.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.301 "is_configured": false, 00:09:47.301 "data_offset": 0, 00:09:47.301 "data_size": 0 00:09:47.301 } 00:09:47.301 ] 00:09:47.301 }' 00:09:47.301 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.301 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.607 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.607 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.607 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.868 [2024-10-15 09:09:05.506653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.868 [2024-10-15 09:09:05.507083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:47.868 [2024-10-15 09:09:05.507119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.868 [2024-10-15 09:09:05.507498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:47.868 [2024-10-15 09:09:05.507739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:47.868 [2024-10-15 09:09:05.507754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:47.868 [2024-10-15 09:09:05.507953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.868 BaseBdev3 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.868 09:09:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.868 [ 00:09:47.868 { 00:09:47.868 "name": "BaseBdev3", 00:09:47.868 "aliases": [ 00:09:47.868 "94230454-a694-451d-b07a-d39c41f5b29f" 00:09:47.868 ], 00:09:47.868 "product_name": "Malloc disk", 00:09:47.868 "block_size": 512, 00:09:47.868 "num_blocks": 65536, 00:09:47.868 "uuid": "94230454-a694-451d-b07a-d39c41f5b29f", 00:09:47.868 "assigned_rate_limits": { 00:09:47.868 "rw_ios_per_sec": 0, 00:09:47.868 "rw_mbytes_per_sec": 0, 00:09:47.868 "r_mbytes_per_sec": 0, 00:09:47.868 "w_mbytes_per_sec": 0 00:09:47.868 }, 00:09:47.868 "claimed": true, 00:09:47.868 "claim_type": "exclusive_write", 00:09:47.868 "zoned": false, 00:09:47.868 "supported_io_types": { 00:09:47.868 "read": true, 00:09:47.868 "write": true, 00:09:47.868 "unmap": true, 00:09:47.868 "flush": true, 00:09:47.868 "reset": true, 00:09:47.868 "nvme_admin": false, 00:09:47.868 "nvme_io": false, 00:09:47.868 "nvme_io_md": false, 00:09:47.868 "write_zeroes": true, 00:09:47.868 "zcopy": true, 00:09:47.868 "get_zone_info": false, 00:09:47.868 "zone_management": false, 00:09:47.868 "zone_append": false, 00:09:47.868 "compare": false, 00:09:47.868 "compare_and_write": false, 00:09:47.868 "abort": true, 00:09:47.868 "seek_hole": false, 00:09:47.868 "seek_data": false, 00:09:47.868 "copy": true, 00:09:47.868 "nvme_iov_md": false 00:09:47.868 }, 00:09:47.868 "memory_domains": [ 00:09:47.868 { 00:09:47.868 "dma_device_id": "system", 00:09:47.868 "dma_device_type": 1 00:09:47.868 }, 00:09:47.868 { 00:09:47.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.868 "dma_device_type": 2 00:09:47.868 } 00:09:47.868 ], 00:09:47.868 "driver_specific": {} 00:09:47.868 } 00:09:47.868 ] 
00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.868 
09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.868 "name": "Existed_Raid", 00:09:47.868 "uuid": "5f146ee5-e3a7-47b6-8c50-5e88a4e464a9", 00:09:47.868 "strip_size_kb": 0, 00:09:47.868 "state": "online", 00:09:47.868 "raid_level": "raid1", 00:09:47.868 "superblock": true, 00:09:47.868 "num_base_bdevs": 3, 00:09:47.868 "num_base_bdevs_discovered": 3, 00:09:47.868 "num_base_bdevs_operational": 3, 00:09:47.868 "base_bdevs_list": [ 00:09:47.868 { 00:09:47.868 "name": "BaseBdev1", 00:09:47.868 "uuid": "adbcd267-bc52-4186-8133-03227867e55e", 00:09:47.868 "is_configured": true, 00:09:47.868 "data_offset": 2048, 00:09:47.868 "data_size": 63488 00:09:47.868 }, 00:09:47.868 { 00:09:47.868 "name": "BaseBdev2", 00:09:47.868 "uuid": "71a0952a-b2e7-464f-88a6-828bd4a362aa", 00:09:47.868 "is_configured": true, 00:09:47.868 "data_offset": 2048, 00:09:47.868 "data_size": 63488 00:09:47.868 }, 00:09:47.868 { 00:09:47.868 "name": "BaseBdev3", 00:09:47.868 "uuid": "94230454-a694-451d-b07a-d39c41f5b29f", 00:09:47.868 "is_configured": true, 00:09:47.868 "data_offset": 2048, 00:09:47.868 "data_size": 63488 00:09:47.868 } 00:09:47.868 ] 00:09:47.868 }' 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.868 09:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.128 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.128 [2024-10-15 09:09:06.014301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.453 "name": "Existed_Raid", 00:09:48.453 "aliases": [ 00:09:48.453 "5f146ee5-e3a7-47b6-8c50-5e88a4e464a9" 00:09:48.453 ], 00:09:48.453 "product_name": "Raid Volume", 00:09:48.453 "block_size": 512, 00:09:48.453 "num_blocks": 63488, 00:09:48.453 "uuid": "5f146ee5-e3a7-47b6-8c50-5e88a4e464a9", 00:09:48.453 "assigned_rate_limits": { 00:09:48.453 "rw_ios_per_sec": 0, 00:09:48.453 "rw_mbytes_per_sec": 0, 00:09:48.453 "r_mbytes_per_sec": 0, 00:09:48.453 "w_mbytes_per_sec": 0 00:09:48.453 }, 00:09:48.453 "claimed": false, 00:09:48.453 "zoned": false, 00:09:48.453 "supported_io_types": { 00:09:48.453 "read": true, 00:09:48.453 "write": true, 00:09:48.453 "unmap": false, 00:09:48.453 "flush": false, 00:09:48.453 "reset": true, 00:09:48.453 "nvme_admin": false, 00:09:48.453 "nvme_io": false, 00:09:48.453 "nvme_io_md": false, 00:09:48.453 "write_zeroes": true, 
00:09:48.453 "zcopy": false, 00:09:48.453 "get_zone_info": false, 00:09:48.453 "zone_management": false, 00:09:48.453 "zone_append": false, 00:09:48.453 "compare": false, 00:09:48.453 "compare_and_write": false, 00:09:48.453 "abort": false, 00:09:48.453 "seek_hole": false, 00:09:48.453 "seek_data": false, 00:09:48.453 "copy": false, 00:09:48.453 "nvme_iov_md": false 00:09:48.453 }, 00:09:48.453 "memory_domains": [ 00:09:48.453 { 00:09:48.453 "dma_device_id": "system", 00:09:48.453 "dma_device_type": 1 00:09:48.453 }, 00:09:48.453 { 00:09:48.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.453 "dma_device_type": 2 00:09:48.453 }, 00:09:48.453 { 00:09:48.453 "dma_device_id": "system", 00:09:48.453 "dma_device_type": 1 00:09:48.453 }, 00:09:48.453 { 00:09:48.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.453 "dma_device_type": 2 00:09:48.453 }, 00:09:48.453 { 00:09:48.453 "dma_device_id": "system", 00:09:48.453 "dma_device_type": 1 00:09:48.453 }, 00:09:48.453 { 00:09:48.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.453 "dma_device_type": 2 00:09:48.453 } 00:09:48.453 ], 00:09:48.453 "driver_specific": { 00:09:48.453 "raid": { 00:09:48.453 "uuid": "5f146ee5-e3a7-47b6-8c50-5e88a4e464a9", 00:09:48.453 "strip_size_kb": 0, 00:09:48.453 "state": "online", 00:09:48.453 "raid_level": "raid1", 00:09:48.453 "superblock": true, 00:09:48.453 "num_base_bdevs": 3, 00:09:48.453 "num_base_bdevs_discovered": 3, 00:09:48.453 "num_base_bdevs_operational": 3, 00:09:48.453 "base_bdevs_list": [ 00:09:48.453 { 00:09:48.453 "name": "BaseBdev1", 00:09:48.453 "uuid": "adbcd267-bc52-4186-8133-03227867e55e", 00:09:48.453 "is_configured": true, 00:09:48.453 "data_offset": 2048, 00:09:48.453 "data_size": 63488 00:09:48.453 }, 00:09:48.453 { 00:09:48.453 "name": "BaseBdev2", 00:09:48.453 "uuid": "71a0952a-b2e7-464f-88a6-828bd4a362aa", 00:09:48.453 "is_configured": true, 00:09:48.453 "data_offset": 2048, 00:09:48.453 "data_size": 63488 00:09:48.453 }, 00:09:48.453 { 
00:09:48.453 "name": "BaseBdev3", 00:09:48.453 "uuid": "94230454-a694-451d-b07a-d39c41f5b29f", 00:09:48.453 "is_configured": true, 00:09:48.453 "data_offset": 2048, 00:09:48.453 "data_size": 63488 00:09:48.453 } 00:09:48.453 ] 00:09:48.453 } 00:09:48.453 } 00:09:48.453 }' 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:48.453 BaseBdev2 00:09:48.453 BaseBdev3' 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.453 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.454 09:09:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.454 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 [2024-10-15 09:09:06.317559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.714 
09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.714 "name": "Existed_Raid", 00:09:48.714 "uuid": "5f146ee5-e3a7-47b6-8c50-5e88a4e464a9", 00:09:48.714 "strip_size_kb": 0, 00:09:48.714 "state": "online", 00:09:48.714 "raid_level": "raid1", 00:09:48.714 "superblock": true, 00:09:48.714 "num_base_bdevs": 3, 00:09:48.714 "num_base_bdevs_discovered": 2, 00:09:48.714 "num_base_bdevs_operational": 2, 00:09:48.714 "base_bdevs_list": [ 00:09:48.714 { 00:09:48.714 "name": null, 00:09:48.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.714 "is_configured": false, 00:09:48.714 "data_offset": 0, 00:09:48.714 "data_size": 63488 00:09:48.714 }, 00:09:48.714 { 00:09:48.714 "name": "BaseBdev2", 00:09:48.714 "uuid": "71a0952a-b2e7-464f-88a6-828bd4a362aa", 00:09:48.714 "is_configured": true, 00:09:48.714 "data_offset": 2048, 00:09:48.714 "data_size": 63488 00:09:48.714 }, 00:09:48.714 { 00:09:48.714 "name": "BaseBdev3", 00:09:48.714 "uuid": "94230454-a694-451d-b07a-d39c41f5b29f", 00:09:48.714 "is_configured": true, 00:09:48.714 "data_offset": 2048, 00:09:48.714 "data_size": 63488 00:09:48.714 } 00:09:48.714 ] 00:09:48.714 }' 00:09:48.714 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.714 
09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.285 09:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.285 [2024-10-15 09:09:06.980786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.285 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.285 [2024-10-15 09:09:07.157251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.285 [2024-10-15 09:09:07.157495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.545 [2024-10-15 09:09:07.264801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.545 [2024-10-15 09:09:07.265035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.545 [2024-10-15 09:09:07.265101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.545 BaseBdev2 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.545 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.546 [ 00:09:49.546 { 00:09:49.546 "name": "BaseBdev2", 00:09:49.546 "aliases": [ 00:09:49.546 "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5" 00:09:49.546 ], 00:09:49.546 "product_name": "Malloc disk", 00:09:49.546 "block_size": 512, 00:09:49.546 "num_blocks": 65536, 00:09:49.546 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:49.546 "assigned_rate_limits": { 00:09:49.546 "rw_ios_per_sec": 0, 00:09:49.546 "rw_mbytes_per_sec": 0, 00:09:49.546 "r_mbytes_per_sec": 0, 00:09:49.546 "w_mbytes_per_sec": 0 00:09:49.546 }, 00:09:49.546 "claimed": false, 00:09:49.546 "zoned": false, 00:09:49.546 "supported_io_types": { 00:09:49.546 "read": true, 00:09:49.546 "write": true, 00:09:49.546 "unmap": true, 00:09:49.546 "flush": true, 00:09:49.546 "reset": true, 00:09:49.546 "nvme_admin": false, 00:09:49.546 "nvme_io": false, 00:09:49.546 
"nvme_io_md": false, 00:09:49.546 "write_zeroes": true, 00:09:49.546 "zcopy": true, 00:09:49.546 "get_zone_info": false, 00:09:49.546 "zone_management": false, 00:09:49.546 "zone_append": false, 00:09:49.546 "compare": false, 00:09:49.546 "compare_and_write": false, 00:09:49.546 "abort": true, 00:09:49.546 "seek_hole": false, 00:09:49.546 "seek_data": false, 00:09:49.546 "copy": true, 00:09:49.546 "nvme_iov_md": false 00:09:49.546 }, 00:09:49.546 "memory_domains": [ 00:09:49.546 { 00:09:49.546 "dma_device_id": "system", 00:09:49.546 "dma_device_type": 1 00:09:49.546 }, 00:09:49.546 { 00:09:49.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.546 "dma_device_type": 2 00:09:49.546 } 00:09:49.546 ], 00:09:49.546 "driver_specific": {} 00:09:49.546 } 00:09:49.546 ] 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.546 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.806 BaseBdev3 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.806 [ 00:09:49.806 { 00:09:49.806 "name": "BaseBdev3", 00:09:49.806 "aliases": [ 00:09:49.806 "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a" 00:09:49.806 ], 00:09:49.806 "product_name": "Malloc disk", 00:09:49.806 "block_size": 512, 00:09:49.806 "num_blocks": 65536, 00:09:49.806 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:49.806 "assigned_rate_limits": { 00:09:49.806 "rw_ios_per_sec": 0, 00:09:49.806 "rw_mbytes_per_sec": 0, 00:09:49.806 "r_mbytes_per_sec": 0, 00:09:49.806 "w_mbytes_per_sec": 0 00:09:49.806 }, 00:09:49.806 "claimed": false, 00:09:49.806 "zoned": false, 00:09:49.806 "supported_io_types": { 00:09:49.806 "read": true, 00:09:49.806 "write": true, 00:09:49.806 "unmap": true, 00:09:49.806 "flush": true, 00:09:49.806 "reset": true, 00:09:49.806 "nvme_admin": false, 
00:09:49.806 "nvme_io": false, 00:09:49.806 "nvme_io_md": false, 00:09:49.806 "write_zeroes": true, 00:09:49.806 "zcopy": true, 00:09:49.806 "get_zone_info": false, 00:09:49.806 "zone_management": false, 00:09:49.806 "zone_append": false, 00:09:49.806 "compare": false, 00:09:49.806 "compare_and_write": false, 00:09:49.806 "abort": true, 00:09:49.806 "seek_hole": false, 00:09:49.806 "seek_data": false, 00:09:49.806 "copy": true, 00:09:49.806 "nvme_iov_md": false 00:09:49.806 }, 00:09:49.806 "memory_domains": [ 00:09:49.806 { 00:09:49.806 "dma_device_id": "system", 00:09:49.806 "dma_device_type": 1 00:09:49.806 }, 00:09:49.806 { 00:09:49.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.806 "dma_device_type": 2 00:09:49.806 } 00:09:49.806 ], 00:09:49.806 "driver_specific": {} 00:09:49.806 } 00:09:49.806 ] 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.806 [2024-10-15 09:09:07.500567] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:49.806 [2024-10-15 09:09:07.500696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:49.806 [2024-10-15 09:09:07.500760] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.806 [2024-10-15 09:09:07.503063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.806 
09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.806 "name": "Existed_Raid", 00:09:49.806 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:49.806 "strip_size_kb": 0, 00:09:49.806 "state": "configuring", 00:09:49.806 "raid_level": "raid1", 00:09:49.806 "superblock": true, 00:09:49.806 "num_base_bdevs": 3, 00:09:49.806 "num_base_bdevs_discovered": 2, 00:09:49.806 "num_base_bdevs_operational": 3, 00:09:49.806 "base_bdevs_list": [ 00:09:49.806 { 00:09:49.806 "name": "BaseBdev1", 00:09:49.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.806 "is_configured": false, 00:09:49.806 "data_offset": 0, 00:09:49.806 "data_size": 0 00:09:49.806 }, 00:09:49.806 { 00:09:49.806 "name": "BaseBdev2", 00:09:49.806 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:49.806 "is_configured": true, 00:09:49.806 "data_offset": 2048, 00:09:49.806 "data_size": 63488 00:09:49.806 }, 00:09:49.806 { 00:09:49.806 "name": "BaseBdev3", 00:09:49.806 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:49.806 "is_configured": true, 00:09:49.806 "data_offset": 2048, 00:09:49.806 "data_size": 63488 00:09:49.806 } 00:09:49.806 ] 00:09:49.806 }' 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.806 09:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.375 [2024-10-15 09:09:08.011747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.375 09:09:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.375 "name": 
"Existed_Raid", 00:09:50.375 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:50.375 "strip_size_kb": 0, 00:09:50.375 "state": "configuring", 00:09:50.375 "raid_level": "raid1", 00:09:50.375 "superblock": true, 00:09:50.375 "num_base_bdevs": 3, 00:09:50.375 "num_base_bdevs_discovered": 1, 00:09:50.375 "num_base_bdevs_operational": 3, 00:09:50.375 "base_bdevs_list": [ 00:09:50.375 { 00:09:50.375 "name": "BaseBdev1", 00:09:50.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.375 "is_configured": false, 00:09:50.375 "data_offset": 0, 00:09:50.375 "data_size": 0 00:09:50.375 }, 00:09:50.375 { 00:09:50.375 "name": null, 00:09:50.375 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:50.375 "is_configured": false, 00:09:50.375 "data_offset": 0, 00:09:50.375 "data_size": 63488 00:09:50.375 }, 00:09:50.375 { 00:09:50.375 "name": "BaseBdev3", 00:09:50.375 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:50.375 "is_configured": true, 00:09:50.375 "data_offset": 2048, 00:09:50.375 "data_size": 63488 00:09:50.375 } 00:09:50.375 ] 00:09:50.375 }' 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.375 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.633 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.633 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.633 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.633 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.633 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.633 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:50.633 
09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.633 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.633 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.893 [2024-10-15 09:09:08.574357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.893 BaseBdev1 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.893 [ 00:09:50.893 { 00:09:50.893 "name": "BaseBdev1", 00:09:50.893 "aliases": [ 00:09:50.893 "3f67e01e-944a-43da-871a-6029283f48eb" 00:09:50.893 ], 00:09:50.893 "product_name": "Malloc disk", 00:09:50.893 "block_size": 512, 00:09:50.893 "num_blocks": 65536, 00:09:50.893 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:50.893 "assigned_rate_limits": { 00:09:50.893 "rw_ios_per_sec": 0, 00:09:50.893 "rw_mbytes_per_sec": 0, 00:09:50.893 "r_mbytes_per_sec": 0, 00:09:50.893 "w_mbytes_per_sec": 0 00:09:50.893 }, 00:09:50.893 "claimed": true, 00:09:50.893 "claim_type": "exclusive_write", 00:09:50.893 "zoned": false, 00:09:50.893 "supported_io_types": { 00:09:50.893 "read": true, 00:09:50.893 "write": true, 00:09:50.893 "unmap": true, 00:09:50.893 "flush": true, 00:09:50.893 "reset": true, 00:09:50.893 "nvme_admin": false, 00:09:50.893 "nvme_io": false, 00:09:50.893 "nvme_io_md": false, 00:09:50.893 "write_zeroes": true, 00:09:50.893 "zcopy": true, 00:09:50.893 "get_zone_info": false, 00:09:50.893 "zone_management": false, 00:09:50.893 "zone_append": false, 00:09:50.893 "compare": false, 00:09:50.893 "compare_and_write": false, 00:09:50.893 "abort": true, 00:09:50.893 "seek_hole": false, 00:09:50.893 "seek_data": false, 00:09:50.893 "copy": true, 00:09:50.893 "nvme_iov_md": false 00:09:50.893 }, 00:09:50.893 "memory_domains": [ 00:09:50.893 { 00:09:50.893 "dma_device_id": "system", 00:09:50.893 "dma_device_type": 1 00:09:50.893 }, 00:09:50.893 { 00:09:50.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.893 "dma_device_type": 2 00:09:50.893 } 00:09:50.893 ], 00:09:50.893 "driver_specific": {} 00:09:50.893 } 00:09:50.893 ] 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:50.893 
09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.893 "name": "Existed_Raid", 00:09:50.893 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:50.893 "strip_size_kb": 0, 
00:09:50.893 "state": "configuring", 00:09:50.893 "raid_level": "raid1", 00:09:50.893 "superblock": true, 00:09:50.893 "num_base_bdevs": 3, 00:09:50.893 "num_base_bdevs_discovered": 2, 00:09:50.893 "num_base_bdevs_operational": 3, 00:09:50.893 "base_bdevs_list": [ 00:09:50.893 { 00:09:50.893 "name": "BaseBdev1", 00:09:50.893 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:50.893 "is_configured": true, 00:09:50.893 "data_offset": 2048, 00:09:50.893 "data_size": 63488 00:09:50.893 }, 00:09:50.893 { 00:09:50.893 "name": null, 00:09:50.893 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:50.893 "is_configured": false, 00:09:50.893 "data_offset": 0, 00:09:50.893 "data_size": 63488 00:09:50.893 }, 00:09:50.893 { 00:09:50.893 "name": "BaseBdev3", 00:09:50.893 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:50.893 "is_configured": true, 00:09:50.893 "data_offset": 2048, 00:09:50.893 "data_size": 63488 00:09:50.893 } 00:09:50.893 ] 00:09:50.893 }' 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.893 09:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.461 [2024-10-15 09:09:09.153495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.461 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.462 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.462 09:09:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.462 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.462 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.462 "name": "Existed_Raid", 00:09:51.462 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:51.462 "strip_size_kb": 0, 00:09:51.462 "state": "configuring", 00:09:51.462 "raid_level": "raid1", 00:09:51.462 "superblock": true, 00:09:51.462 "num_base_bdevs": 3, 00:09:51.462 "num_base_bdevs_discovered": 1, 00:09:51.462 "num_base_bdevs_operational": 3, 00:09:51.462 "base_bdevs_list": [ 00:09:51.462 { 00:09:51.462 "name": "BaseBdev1", 00:09:51.462 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:51.462 "is_configured": true, 00:09:51.462 "data_offset": 2048, 00:09:51.462 "data_size": 63488 00:09:51.462 }, 00:09:51.462 { 00:09:51.462 "name": null, 00:09:51.462 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:51.462 "is_configured": false, 00:09:51.462 "data_offset": 0, 00:09:51.462 "data_size": 63488 00:09:51.462 }, 00:09:51.462 { 00:09:51.462 "name": null, 00:09:51.462 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:51.462 "is_configured": false, 00:09:51.462 "data_offset": 0, 00:09:51.462 "data_size": 63488 00:09:51.462 } 00:09:51.462 ] 00:09:51.462 }' 00:09:51.462 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.462 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.030 [2024-10-15 09:09:09.676712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.030 "name": "Existed_Raid", 00:09:52.030 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:52.030 "strip_size_kb": 0, 00:09:52.030 "state": "configuring", 00:09:52.030 "raid_level": "raid1", 00:09:52.030 "superblock": true, 00:09:52.030 "num_base_bdevs": 3, 00:09:52.030 "num_base_bdevs_discovered": 2, 00:09:52.030 "num_base_bdevs_operational": 3, 00:09:52.030 "base_bdevs_list": [ 00:09:52.030 { 00:09:52.030 "name": "BaseBdev1", 00:09:52.030 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:52.030 "is_configured": true, 00:09:52.030 "data_offset": 2048, 00:09:52.030 "data_size": 63488 00:09:52.030 }, 00:09:52.030 { 00:09:52.030 "name": null, 00:09:52.030 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:52.030 "is_configured": false, 00:09:52.030 "data_offset": 0, 00:09:52.030 "data_size": 63488 00:09:52.030 }, 00:09:52.030 { 00:09:52.030 "name": "BaseBdev3", 00:09:52.030 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:52.030 "is_configured": true, 00:09:52.030 "data_offset": 2048, 00:09:52.030 "data_size": 63488 00:09:52.030 } 00:09:52.030 ] 00:09:52.030 }' 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.030 09:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.290 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.290 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.290 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.290 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.290 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.550 [2024-10-15 09:09:10.199832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.550 "name": "Existed_Raid", 00:09:52.550 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:52.550 "strip_size_kb": 0, 00:09:52.550 "state": "configuring", 00:09:52.550 "raid_level": "raid1", 00:09:52.550 "superblock": true, 00:09:52.550 "num_base_bdevs": 3, 00:09:52.550 "num_base_bdevs_discovered": 1, 00:09:52.550 "num_base_bdevs_operational": 3, 00:09:52.550 "base_bdevs_list": [ 00:09:52.550 { 00:09:52.550 "name": null, 00:09:52.550 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:52.550 "is_configured": false, 00:09:52.550 "data_offset": 0, 00:09:52.550 "data_size": 63488 00:09:52.550 }, 00:09:52.550 { 00:09:52.550 "name": null, 00:09:52.550 "uuid": 
"969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:52.550 "is_configured": false, 00:09:52.550 "data_offset": 0, 00:09:52.550 "data_size": 63488 00:09:52.550 }, 00:09:52.550 { 00:09:52.550 "name": "BaseBdev3", 00:09:52.550 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:52.550 "is_configured": true, 00:09:52.550 "data_offset": 2048, 00:09:52.550 "data_size": 63488 00:09:52.550 } 00:09:52.550 ] 00:09:52.550 }' 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.550 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.119 [2024-10-15 09:09:10.803524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.119 "name": "Existed_Raid", 00:09:53.119 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:53.119 "strip_size_kb": 0, 00:09:53.119 "state": "configuring", 00:09:53.119 
"raid_level": "raid1", 00:09:53.119 "superblock": true, 00:09:53.119 "num_base_bdevs": 3, 00:09:53.119 "num_base_bdevs_discovered": 2, 00:09:53.119 "num_base_bdevs_operational": 3, 00:09:53.119 "base_bdevs_list": [ 00:09:53.119 { 00:09:53.119 "name": null, 00:09:53.119 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:53.119 "is_configured": false, 00:09:53.119 "data_offset": 0, 00:09:53.119 "data_size": 63488 00:09:53.119 }, 00:09:53.119 { 00:09:53.119 "name": "BaseBdev2", 00:09:53.119 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:53.119 "is_configured": true, 00:09:53.119 "data_offset": 2048, 00:09:53.119 "data_size": 63488 00:09:53.119 }, 00:09:53.119 { 00:09:53.119 "name": "BaseBdev3", 00:09:53.119 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:53.119 "is_configured": true, 00:09:53.119 "data_offset": 2048, 00:09:53.119 "data_size": 63488 00:09:53.119 } 00:09:53.119 ] 00:09:53.119 }' 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.119 09:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:53.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:53.688 09:09:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3f67e01e-944a-43da-871a-6029283f48eb 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.688 [2024-10-15 09:09:11.371359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:53.688 [2024-10-15 09:09:11.371739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:53.688 [2024-10-15 09:09:11.371757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:53.688 [2024-10-15 09:09:11.372100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:53.688 NewBaseBdev 00:09:53.688 [2024-10-15 09:09:11.372305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:53.688 [2024-10-15 09:09:11.372333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:53.688 [2024-10-15 09:09:11.372522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:53.688 
09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.688 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.689 [ 00:09:53.689 { 00:09:53.689 "name": "NewBaseBdev", 00:09:53.689 "aliases": [ 00:09:53.689 "3f67e01e-944a-43da-871a-6029283f48eb" 00:09:53.689 ], 00:09:53.689 "product_name": "Malloc disk", 00:09:53.689 "block_size": 512, 00:09:53.689 "num_blocks": 65536, 00:09:53.689 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:53.689 "assigned_rate_limits": { 00:09:53.689 "rw_ios_per_sec": 0, 00:09:53.689 "rw_mbytes_per_sec": 0, 00:09:53.689 "r_mbytes_per_sec": 0, 00:09:53.689 "w_mbytes_per_sec": 0 00:09:53.689 }, 00:09:53.689 "claimed": true, 00:09:53.689 "claim_type": "exclusive_write", 00:09:53.689 
"zoned": false, 00:09:53.689 "supported_io_types": { 00:09:53.689 "read": true, 00:09:53.689 "write": true, 00:09:53.689 "unmap": true, 00:09:53.689 "flush": true, 00:09:53.689 "reset": true, 00:09:53.689 "nvme_admin": false, 00:09:53.689 "nvme_io": false, 00:09:53.689 "nvme_io_md": false, 00:09:53.689 "write_zeroes": true, 00:09:53.689 "zcopy": true, 00:09:53.689 "get_zone_info": false, 00:09:53.689 "zone_management": false, 00:09:53.689 "zone_append": false, 00:09:53.689 "compare": false, 00:09:53.689 "compare_and_write": false, 00:09:53.689 "abort": true, 00:09:53.689 "seek_hole": false, 00:09:53.689 "seek_data": false, 00:09:53.689 "copy": true, 00:09:53.689 "nvme_iov_md": false 00:09:53.689 }, 00:09:53.689 "memory_domains": [ 00:09:53.689 { 00:09:53.689 "dma_device_id": "system", 00:09:53.689 "dma_device_type": 1 00:09:53.689 }, 00:09:53.689 { 00:09:53.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.689 "dma_device_type": 2 00:09:53.689 } 00:09:53.689 ], 00:09:53.689 "driver_specific": {} 00:09:53.689 } 00:09:53.689 ] 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.689 "name": "Existed_Raid", 00:09:53.689 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:53.689 "strip_size_kb": 0, 00:09:53.689 "state": "online", 00:09:53.689 "raid_level": "raid1", 00:09:53.689 "superblock": true, 00:09:53.689 "num_base_bdevs": 3, 00:09:53.689 "num_base_bdevs_discovered": 3, 00:09:53.689 "num_base_bdevs_operational": 3, 00:09:53.689 "base_bdevs_list": [ 00:09:53.689 { 00:09:53.689 "name": "NewBaseBdev", 00:09:53.689 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:53.689 "is_configured": true, 00:09:53.689 "data_offset": 2048, 00:09:53.689 "data_size": 63488 00:09:53.689 }, 00:09:53.689 { 00:09:53.689 "name": "BaseBdev2", 00:09:53.689 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:53.689 "is_configured": true, 00:09:53.689 "data_offset": 2048, 00:09:53.689 "data_size": 63488 00:09:53.689 }, 00:09:53.689 
{ 00:09:53.689 "name": "BaseBdev3", 00:09:53.689 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:53.689 "is_configured": true, 00:09:53.689 "data_offset": 2048, 00:09:53.689 "data_size": 63488 00:09:53.689 } 00:09:53.689 ] 00:09:53.689 }' 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.689 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.257 [2024-10-15 09:09:11.874972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.257 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:54.257 "name": "Existed_Raid", 00:09:54.257 
"aliases": [ 00:09:54.257 "e6f76098-b6a7-4e48-adf1-03368b30620b" 00:09:54.257 ], 00:09:54.257 "product_name": "Raid Volume", 00:09:54.257 "block_size": 512, 00:09:54.257 "num_blocks": 63488, 00:09:54.257 "uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:54.257 "assigned_rate_limits": { 00:09:54.257 "rw_ios_per_sec": 0, 00:09:54.257 "rw_mbytes_per_sec": 0, 00:09:54.257 "r_mbytes_per_sec": 0, 00:09:54.257 "w_mbytes_per_sec": 0 00:09:54.257 }, 00:09:54.257 "claimed": false, 00:09:54.257 "zoned": false, 00:09:54.257 "supported_io_types": { 00:09:54.257 "read": true, 00:09:54.257 "write": true, 00:09:54.257 "unmap": false, 00:09:54.257 "flush": false, 00:09:54.257 "reset": true, 00:09:54.257 "nvme_admin": false, 00:09:54.257 "nvme_io": false, 00:09:54.257 "nvme_io_md": false, 00:09:54.258 "write_zeroes": true, 00:09:54.258 "zcopy": false, 00:09:54.258 "get_zone_info": false, 00:09:54.258 "zone_management": false, 00:09:54.258 "zone_append": false, 00:09:54.258 "compare": false, 00:09:54.258 "compare_and_write": false, 00:09:54.258 "abort": false, 00:09:54.258 "seek_hole": false, 00:09:54.258 "seek_data": false, 00:09:54.258 "copy": false, 00:09:54.258 "nvme_iov_md": false 00:09:54.258 }, 00:09:54.258 "memory_domains": [ 00:09:54.258 { 00:09:54.258 "dma_device_id": "system", 00:09:54.258 "dma_device_type": 1 00:09:54.258 }, 00:09:54.258 { 00:09:54.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.258 "dma_device_type": 2 00:09:54.258 }, 00:09:54.258 { 00:09:54.258 "dma_device_id": "system", 00:09:54.258 "dma_device_type": 1 00:09:54.258 }, 00:09:54.258 { 00:09:54.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.258 "dma_device_type": 2 00:09:54.258 }, 00:09:54.258 { 00:09:54.258 "dma_device_id": "system", 00:09:54.258 "dma_device_type": 1 00:09:54.258 }, 00:09:54.258 { 00:09:54.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.258 "dma_device_type": 2 00:09:54.258 } 00:09:54.258 ], 00:09:54.258 "driver_specific": { 00:09:54.258 "raid": { 00:09:54.258 
"uuid": "e6f76098-b6a7-4e48-adf1-03368b30620b", 00:09:54.258 "strip_size_kb": 0, 00:09:54.258 "state": "online", 00:09:54.258 "raid_level": "raid1", 00:09:54.258 "superblock": true, 00:09:54.258 "num_base_bdevs": 3, 00:09:54.258 "num_base_bdevs_discovered": 3, 00:09:54.258 "num_base_bdevs_operational": 3, 00:09:54.258 "base_bdevs_list": [ 00:09:54.258 { 00:09:54.258 "name": "NewBaseBdev", 00:09:54.258 "uuid": "3f67e01e-944a-43da-871a-6029283f48eb", 00:09:54.258 "is_configured": true, 00:09:54.258 "data_offset": 2048, 00:09:54.258 "data_size": 63488 00:09:54.258 }, 00:09:54.258 { 00:09:54.258 "name": "BaseBdev2", 00:09:54.258 "uuid": "969f0f65-8cb0-4c3d-9368-ecc3f0c0a5c5", 00:09:54.258 "is_configured": true, 00:09:54.258 "data_offset": 2048, 00:09:54.258 "data_size": 63488 00:09:54.258 }, 00:09:54.258 { 00:09:54.258 "name": "BaseBdev3", 00:09:54.258 "uuid": "2b07ff9b-ef48-4aed-b3e1-31364b29cd2a", 00:09:54.258 "is_configured": true, 00:09:54.258 "data_offset": 2048, 00:09:54.258 "data_size": 63488 00:09:54.258 } 00:09:54.258 ] 00:09:54.258 } 00:09:54.258 } 00:09:54.258 }' 00:09:54.258 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.258 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:54.258 BaseBdev2 00:09:54.258 BaseBdev3' 00:09:54.258 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:54.258 09:09:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.258 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.518 [2024-10-15 09:09:12.154141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.518 [2024-10-15 09:09:12.154261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.518 [2024-10-15 09:09:12.154425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.518 [2024-10-15 09:09:12.154851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.518 [2024-10-15 09:09:12.154929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68108 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 68108 ']' 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68108 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68108 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.518 killing process with pid 68108 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68108' 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68108 00:09:54.518 [2024-10-15 09:09:12.204023] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.518 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68108 00:09:54.778 [2024-10-15 09:09:12.576180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.159 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:56.159 00:09:56.159 real 0m11.438s 00:09:56.159 user 0m17.910s 00:09:56.159 sys 0m2.052s 00:09:56.159 ************************************ 00:09:56.159 END TEST raid_state_function_test_sb 00:09:56.159 ************************************ 00:09:56.159 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.159 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 09:09:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:56.159 09:09:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:56.159 09:09:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.159 09:09:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 ************************************ 00:09:56.159 START TEST raid_superblock_test 00:09:56.159 ************************************ 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68734 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68734 00:09:56.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68734 ']' 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.159 09:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.159 [2024-10-15 09:09:14.030120] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:09:56.159 [2024-10-15 09:09:14.030258] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68734 ] 00:09:56.419 [2024-10-15 09:09:14.196378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.678 [2024-10-15 09:09:14.318076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.678 [2024-10-15 09:09:14.534536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.678 [2024-10-15 09:09:14.534607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:57.246 
09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.246 malloc1 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.246 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.246 [2024-10-15 09:09:14.947047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:57.246 [2024-10-15 09:09:14.947223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.246 [2024-10-15 09:09:14.947267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:57.246 [2024-10-15 09:09:14.947297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.246 [2024-10-15 09:09:14.949527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.247 [2024-10-15 09:09:14.949606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:57.247 pt1 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.247 09:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.247 malloc2 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.247 [2024-10-15 09:09:15.010131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:57.247 [2024-10-15 09:09:15.010311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.247 [2024-10-15 09:09:15.010347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:57.247 [2024-10-15 09:09:15.010358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.247 [2024-10-15 09:09:15.012830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.247 [2024-10-15 09:09:15.012875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:57.247 
pt2 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.247 malloc3 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.247 [2024-10-15 09:09:15.081608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:57.247 [2024-10-15 09:09:15.081816] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.247 [2024-10-15 09:09:15.081862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:57.247 [2024-10-15 09:09:15.081898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.247 [2024-10-15 09:09:15.084102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.247 [2024-10-15 09:09:15.084201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:57.247 pt3 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.247 [2024-10-15 09:09:15.093680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:57.247 [2024-10-15 09:09:15.095735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:57.247 [2024-10-15 09:09:15.095871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:57.247 [2024-10-15 09:09:15.096080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:57.247 [2024-10-15 09:09:15.096096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.247 [2024-10-15 09:09:15.096404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:57.247 
[2024-10-15 09:09:15.096612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:57.247 [2024-10-15 09:09:15.096624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:57.247 [2024-10-15 09:09:15.096899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:09:57.247 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.506 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.506 "name": "raid_bdev1", 00:09:57.506 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:09:57.506 "strip_size_kb": 0, 00:09:57.506 "state": "online", 00:09:57.506 "raid_level": "raid1", 00:09:57.506 "superblock": true, 00:09:57.506 "num_base_bdevs": 3, 00:09:57.506 "num_base_bdevs_discovered": 3, 00:09:57.506 "num_base_bdevs_operational": 3, 00:09:57.506 "base_bdevs_list": [ 00:09:57.506 { 00:09:57.506 "name": "pt1", 00:09:57.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.506 "is_configured": true, 00:09:57.506 "data_offset": 2048, 00:09:57.506 "data_size": 63488 00:09:57.506 }, 00:09:57.506 { 00:09:57.506 "name": "pt2", 00:09:57.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.506 "is_configured": true, 00:09:57.506 "data_offset": 2048, 00:09:57.506 "data_size": 63488 00:09:57.506 }, 00:09:57.506 { 00:09:57.506 "name": "pt3", 00:09:57.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.506 "is_configured": true, 00:09:57.506 "data_offset": 2048, 00:09:57.506 "data_size": 63488 00:09:57.506 } 00:09:57.506 ] 00:09:57.506 }' 00:09:57.506 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.506 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.765 09:09:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.765 [2024-10-15 09:09:15.541303] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.765 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.765 "name": "raid_bdev1", 00:09:57.765 "aliases": [ 00:09:57.765 "c9e21295-6108-4c30-906b-43149dd14b6d" 00:09:57.765 ], 00:09:57.765 "product_name": "Raid Volume", 00:09:57.765 "block_size": 512, 00:09:57.765 "num_blocks": 63488, 00:09:57.765 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:09:57.766 "assigned_rate_limits": { 00:09:57.766 "rw_ios_per_sec": 0, 00:09:57.766 "rw_mbytes_per_sec": 0, 00:09:57.766 "r_mbytes_per_sec": 0, 00:09:57.766 "w_mbytes_per_sec": 0 00:09:57.766 }, 00:09:57.766 "claimed": false, 00:09:57.766 "zoned": false, 00:09:57.766 "supported_io_types": { 00:09:57.766 "read": true, 00:09:57.766 "write": true, 00:09:57.766 "unmap": false, 00:09:57.766 "flush": false, 00:09:57.766 "reset": true, 00:09:57.766 "nvme_admin": false, 00:09:57.766 "nvme_io": false, 00:09:57.766 "nvme_io_md": false, 00:09:57.766 "write_zeroes": true, 00:09:57.766 "zcopy": false, 00:09:57.766 "get_zone_info": false, 00:09:57.766 "zone_management": false, 00:09:57.766 "zone_append": false, 00:09:57.766 "compare": false, 00:09:57.766 
"compare_and_write": false, 00:09:57.766 "abort": false, 00:09:57.766 "seek_hole": false, 00:09:57.766 "seek_data": false, 00:09:57.766 "copy": false, 00:09:57.766 "nvme_iov_md": false 00:09:57.766 }, 00:09:57.766 "memory_domains": [ 00:09:57.766 { 00:09:57.766 "dma_device_id": "system", 00:09:57.766 "dma_device_type": 1 00:09:57.766 }, 00:09:57.766 { 00:09:57.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.766 "dma_device_type": 2 00:09:57.766 }, 00:09:57.766 { 00:09:57.766 "dma_device_id": "system", 00:09:57.766 "dma_device_type": 1 00:09:57.766 }, 00:09:57.766 { 00:09:57.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.766 "dma_device_type": 2 00:09:57.766 }, 00:09:57.766 { 00:09:57.766 "dma_device_id": "system", 00:09:57.766 "dma_device_type": 1 00:09:57.766 }, 00:09:57.766 { 00:09:57.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.766 "dma_device_type": 2 00:09:57.766 } 00:09:57.766 ], 00:09:57.766 "driver_specific": { 00:09:57.766 "raid": { 00:09:57.766 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:09:57.766 "strip_size_kb": 0, 00:09:57.766 "state": "online", 00:09:57.766 "raid_level": "raid1", 00:09:57.766 "superblock": true, 00:09:57.766 "num_base_bdevs": 3, 00:09:57.766 "num_base_bdevs_discovered": 3, 00:09:57.766 "num_base_bdevs_operational": 3, 00:09:57.766 "base_bdevs_list": [ 00:09:57.766 { 00:09:57.766 "name": "pt1", 00:09:57.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.766 "is_configured": true, 00:09:57.766 "data_offset": 2048, 00:09:57.766 "data_size": 63488 00:09:57.766 }, 00:09:57.766 { 00:09:57.766 "name": "pt2", 00:09:57.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.766 "is_configured": true, 00:09:57.766 "data_offset": 2048, 00:09:57.766 "data_size": 63488 00:09:57.766 }, 00:09:57.766 { 00:09:57.766 "name": "pt3", 00:09:57.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.766 "is_configured": true, 00:09:57.766 "data_offset": 2048, 00:09:57.766 "data_size": 63488 00:09:57.766 } 
00:09:57.766 ] 00:09:57.766 } 00:09:57.766 } 00:09:57.766 }' 00:09:57.766 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.766 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:57.766 pt2 00:09:57.766 pt3' 00:09:57.766 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:58.026 [2024-10-15 09:09:15.820814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c9e21295-6108-4c30-906b-43149dd14b6d 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c9e21295-6108-4c30-906b-43149dd14b6d ']' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.026 [2024-10-15 09:09:15.868433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.026 [2024-10-15 09:09:15.868561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.026 [2024-10-15 09:09:15.868666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.026 [2024-10-15 09:09:15.868782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.026 [2024-10-15 09:09:15.868796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:58.026 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:58.286 09:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.286 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:58.286 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:58.286 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:58.286 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:58.286 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:58.286 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.286 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:58.286 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.287 [2024-10-15 09:09:16.028197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:58.287 [2024-10-15 09:09:16.030354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:58.287 [2024-10-15 09:09:16.030454] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:58.287 [2024-10-15 09:09:16.030525] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:58.287 [2024-10-15 09:09:16.030623] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:58.287 [2024-10-15 09:09:16.030679] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:58.287 [2024-10-15 09:09:16.030751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.287 [2024-10-15 09:09:16.030807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:58.287 request: 00:09:58.287 { 00:09:58.287 "name": "raid_bdev1", 00:09:58.287 "raid_level": "raid1", 00:09:58.287 "base_bdevs": [ 00:09:58.287 "malloc1", 00:09:58.287 "malloc2", 00:09:58.287 "malloc3" 00:09:58.287 ], 00:09:58.287 "superblock": false, 00:09:58.287 "method": "bdev_raid_create", 00:09:58.287 "req_id": 1 00:09:58.287 } 00:09:58.287 Got JSON-RPC error response 00:09:58.287 response: 00:09:58.287 { 00:09:58.287 "code": -17, 00:09:58.287 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:58.287 } 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.287 [2024-10-15 09:09:16.100055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:58.287 [2024-10-15 09:09:16.100261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.287 [2024-10-15 09:09:16.100307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:58.287 [2024-10-15 09:09:16.100337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.287 [2024-10-15 09:09:16.102643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.287 [2024-10-15 09:09:16.102775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:58.287 [2024-10-15 09:09:16.102901] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:58.287 [2024-10-15 09:09:16.102993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:58.287 pt1 00:09:58.287 
09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.287 "name": "raid_bdev1", 00:09:58.287 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:09:58.287 "strip_size_kb": 0, 00:09:58.287 
"state": "configuring", 00:09:58.287 "raid_level": "raid1", 00:09:58.287 "superblock": true, 00:09:58.287 "num_base_bdevs": 3, 00:09:58.287 "num_base_bdevs_discovered": 1, 00:09:58.287 "num_base_bdevs_operational": 3, 00:09:58.287 "base_bdevs_list": [ 00:09:58.287 { 00:09:58.287 "name": "pt1", 00:09:58.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.287 "is_configured": true, 00:09:58.287 "data_offset": 2048, 00:09:58.287 "data_size": 63488 00:09:58.287 }, 00:09:58.287 { 00:09:58.287 "name": null, 00:09:58.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.287 "is_configured": false, 00:09:58.287 "data_offset": 2048, 00:09:58.287 "data_size": 63488 00:09:58.287 }, 00:09:58.287 { 00:09:58.287 "name": null, 00:09:58.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.287 "is_configured": false, 00:09:58.287 "data_offset": 2048, 00:09:58.287 "data_size": 63488 00:09:58.287 } 00:09:58.287 ] 00:09:58.287 }' 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.287 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 [2024-10-15 09:09:16.551281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:58.856 [2024-10-15 09:09:16.551456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.856 [2024-10-15 09:09:16.551500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:58.856 
[2024-10-15 09:09:16.551533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.856 [2024-10-15 09:09:16.552057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.856 [2024-10-15 09:09:16.552124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:58.856 [2024-10-15 09:09:16.552257] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:58.856 [2024-10-15 09:09:16.552314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:58.856 pt2 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 [2024-10-15 09:09:16.563249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.856 "name": "raid_bdev1", 00:09:58.856 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:09:58.856 "strip_size_kb": 0, 00:09:58.856 "state": "configuring", 00:09:58.856 "raid_level": "raid1", 00:09:58.856 "superblock": true, 00:09:58.856 "num_base_bdevs": 3, 00:09:58.856 "num_base_bdevs_discovered": 1, 00:09:58.856 "num_base_bdevs_operational": 3, 00:09:58.856 "base_bdevs_list": [ 00:09:58.856 { 00:09:58.856 "name": "pt1", 00:09:58.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:58.856 "is_configured": true, 00:09:58.856 "data_offset": 2048, 00:09:58.856 "data_size": 63488 00:09:58.856 }, 00:09:58.856 { 00:09:58.856 "name": null, 00:09:58.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:58.856 "is_configured": false, 00:09:58.856 "data_offset": 0, 00:09:58.856 "data_size": 63488 00:09:58.856 }, 00:09:58.856 { 00:09:58.856 "name": null, 00:09:58.856 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:58.856 "is_configured": false, 00:09:58.856 
"data_offset": 2048, 00:09:58.856 "data_size": 63488 00:09:58.856 } 00:09:58.856 ] 00:09:58.856 }' 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.856 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.116 [2024-10-15 09:09:16.970514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.116 [2024-10-15 09:09:16.970713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.116 [2024-10-15 09:09:16.970760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:59.116 [2024-10-15 09:09:16.970796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.116 [2024-10-15 09:09:16.971286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.116 [2024-10-15 09:09:16.971346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.116 [2024-10-15 09:09:16.971457] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:59.116 [2024-10-15 09:09:16.971527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.116 pt2 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.116 09:09:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.116 [2024-10-15 09:09:16.982487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:59.116 [2024-10-15 09:09:16.982584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.116 [2024-10-15 09:09:16.982608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:59.116 [2024-10-15 09:09:16.982621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.116 [2024-10-15 09:09:16.982991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.116 [2024-10-15 09:09:16.983013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:59.116 [2024-10-15 09:09:16.983075] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:59.116 [2024-10-15 09:09:16.983095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:59.116 [2024-10-15 09:09:16.983212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:59.116 [2024-10-15 09:09:16.983224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.116 [2024-10-15 09:09:16.983442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:59.116 [2024-10-15 09:09:16.983584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:59.116 [2024-10-15 09:09:16.983594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:59.116 [2024-10-15 09:09:16.983758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.116 pt3 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.116 09:09:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.116 09:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.376 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.376 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.376 "name": "raid_bdev1", 00:09:59.376 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:09:59.376 "strip_size_kb": 0, 00:09:59.376 "state": "online", 00:09:59.376 "raid_level": "raid1", 00:09:59.376 "superblock": true, 00:09:59.376 "num_base_bdevs": 3, 00:09:59.376 "num_base_bdevs_discovered": 3, 00:09:59.376 "num_base_bdevs_operational": 3, 00:09:59.376 "base_bdevs_list": [ 00:09:59.376 { 00:09:59.376 "name": "pt1", 00:09:59.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.376 "is_configured": true, 00:09:59.376 "data_offset": 2048, 00:09:59.376 "data_size": 63488 00:09:59.376 }, 00:09:59.376 { 00:09:59.376 "name": "pt2", 00:09:59.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.376 "is_configured": true, 00:09:59.376 "data_offset": 2048, 00:09:59.376 "data_size": 63488 00:09:59.376 }, 00:09:59.376 { 00:09:59.376 "name": "pt3", 00:09:59.376 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.376 "is_configured": true, 00:09:59.376 "data_offset": 2048, 00:09:59.376 "data_size": 63488 00:09:59.376 } 00:09:59.376 ] 00:09:59.376 }' 00:09:59.376 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.376 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.636 [2024-10-15 09:09:17.458088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.636 "name": "raid_bdev1", 00:09:59.636 "aliases": [ 00:09:59.636 "c9e21295-6108-4c30-906b-43149dd14b6d" 00:09:59.636 ], 00:09:59.636 "product_name": "Raid Volume", 00:09:59.636 "block_size": 512, 00:09:59.636 "num_blocks": 63488, 00:09:59.636 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:09:59.636 "assigned_rate_limits": { 00:09:59.636 "rw_ios_per_sec": 0, 00:09:59.636 "rw_mbytes_per_sec": 0, 00:09:59.636 "r_mbytes_per_sec": 0, 00:09:59.636 "w_mbytes_per_sec": 0 00:09:59.636 }, 00:09:59.636 "claimed": false, 00:09:59.636 "zoned": false, 00:09:59.636 "supported_io_types": { 00:09:59.636 "read": true, 00:09:59.636 "write": true, 00:09:59.636 "unmap": false, 00:09:59.636 "flush": false, 00:09:59.636 "reset": true, 00:09:59.636 "nvme_admin": false, 00:09:59.636 "nvme_io": false, 00:09:59.636 "nvme_io_md": false, 00:09:59.636 "write_zeroes": true, 00:09:59.636 "zcopy": false, 00:09:59.636 "get_zone_info": 
false, 00:09:59.636 "zone_management": false, 00:09:59.636 "zone_append": false, 00:09:59.636 "compare": false, 00:09:59.636 "compare_and_write": false, 00:09:59.636 "abort": false, 00:09:59.636 "seek_hole": false, 00:09:59.636 "seek_data": false, 00:09:59.636 "copy": false, 00:09:59.636 "nvme_iov_md": false 00:09:59.636 }, 00:09:59.636 "memory_domains": [ 00:09:59.636 { 00:09:59.636 "dma_device_id": "system", 00:09:59.636 "dma_device_type": 1 00:09:59.636 }, 00:09:59.636 { 00:09:59.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.636 "dma_device_type": 2 00:09:59.636 }, 00:09:59.636 { 00:09:59.636 "dma_device_id": "system", 00:09:59.636 "dma_device_type": 1 00:09:59.636 }, 00:09:59.636 { 00:09:59.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.636 "dma_device_type": 2 00:09:59.636 }, 00:09:59.636 { 00:09:59.636 "dma_device_id": "system", 00:09:59.636 "dma_device_type": 1 00:09:59.636 }, 00:09:59.636 { 00:09:59.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.636 "dma_device_type": 2 00:09:59.636 } 00:09:59.636 ], 00:09:59.636 "driver_specific": { 00:09:59.636 "raid": { 00:09:59.636 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:09:59.636 "strip_size_kb": 0, 00:09:59.636 "state": "online", 00:09:59.636 "raid_level": "raid1", 00:09:59.636 "superblock": true, 00:09:59.636 "num_base_bdevs": 3, 00:09:59.636 "num_base_bdevs_discovered": 3, 00:09:59.636 "num_base_bdevs_operational": 3, 00:09:59.636 "base_bdevs_list": [ 00:09:59.636 { 00:09:59.636 "name": "pt1", 00:09:59.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.636 "is_configured": true, 00:09:59.636 "data_offset": 2048, 00:09:59.636 "data_size": 63488 00:09:59.636 }, 00:09:59.636 { 00:09:59.636 "name": "pt2", 00:09:59.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.636 "is_configured": true, 00:09:59.636 "data_offset": 2048, 00:09:59.636 "data_size": 63488 00:09:59.636 }, 00:09:59.636 { 00:09:59.636 "name": "pt3", 00:09:59.636 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:59.636 "is_configured": true, 00:09:59.636 "data_offset": 2048, 00:09:59.636 "data_size": 63488 00:09:59.636 } 00:09:59.636 ] 00:09:59.636 } 00:09:59.636 } 00:09:59.636 }' 00:09:59.636 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:59.896 pt2 00:09:59.896 pt3' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:59.896 [2024-10-15 09:09:17.761403] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.896 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c9e21295-6108-4c30-906b-43149dd14b6d '!=' c9e21295-6108-4c30-906b-43149dd14b6d ']' 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.155 [2024-10-15 09:09:17.813133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.155 09:09:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.155 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.155 "name": "raid_bdev1", 00:10:00.155 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:10:00.155 "strip_size_kb": 0, 00:10:00.155 "state": "online", 00:10:00.155 "raid_level": "raid1", 00:10:00.155 "superblock": true, 00:10:00.155 "num_base_bdevs": 3, 00:10:00.155 "num_base_bdevs_discovered": 2, 00:10:00.155 "num_base_bdevs_operational": 2, 00:10:00.156 "base_bdevs_list": [ 00:10:00.156 { 00:10:00.156 "name": null, 00:10:00.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.156 "is_configured": false, 00:10:00.156 "data_offset": 0, 00:10:00.156 "data_size": 63488 00:10:00.156 }, 00:10:00.156 { 00:10:00.156 "name": "pt2", 00:10:00.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.156 "is_configured": true, 00:10:00.156 "data_offset": 2048, 00:10:00.156 "data_size": 63488 00:10:00.156 }, 00:10:00.156 { 00:10:00.156 "name": "pt3", 00:10:00.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.156 "is_configured": true, 00:10:00.156 "data_offset": 2048, 00:10:00.156 "data_size": 63488 00:10:00.156 } 
00:10:00.156 ] 00:10:00.156 }' 00:10:00.156 09:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.156 09:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.415 [2024-10-15 09:09:18.288292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.415 [2024-10-15 09:09:18.288330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.415 [2024-10-15 09:09:18.288417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.415 [2024-10-15 09:09:18.288478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.415 [2024-10-15 09:09:18.288493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:00.415 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.674 09:09:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.674 [2024-10-15 09:09:18.380162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.674 [2024-10-15 09:09:18.380341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.674 [2024-10-15 09:09:18.380379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:00.674 [2024-10-15 09:09:18.380410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.674 [2024-10-15 09:09:18.382805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.674 [2024-10-15 09:09:18.382889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.674 [2024-10-15 09:09:18.383007] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:00.674 [2024-10-15 09:09:18.383103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.674 pt2 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.674 09:09:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.674 "name": "raid_bdev1", 00:10:00.674 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:10:00.674 "strip_size_kb": 0, 00:10:00.674 "state": "configuring", 00:10:00.674 "raid_level": "raid1", 00:10:00.674 "superblock": true, 00:10:00.674 "num_base_bdevs": 3, 00:10:00.674 "num_base_bdevs_discovered": 1, 00:10:00.674 "num_base_bdevs_operational": 2, 00:10:00.674 "base_bdevs_list": [ 00:10:00.674 { 00:10:00.674 "name": null, 00:10:00.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.674 "is_configured": false, 00:10:00.674 "data_offset": 2048, 00:10:00.674 "data_size": 63488 00:10:00.674 }, 00:10:00.674 { 00:10:00.674 "name": "pt2", 00:10:00.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.674 "is_configured": true, 00:10:00.674 "data_offset": 2048, 00:10:00.674 "data_size": 63488 00:10:00.674 }, 00:10:00.674 { 00:10:00.674 "name": null, 00:10:00.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.674 "is_configured": false, 00:10:00.674 "data_offset": 2048, 00:10:00.674 "data_size": 63488 00:10:00.674 } 
00:10:00.674 ] 00:10:00.674 }' 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.674 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.934 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:00.934 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:00.934 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:00.934 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:00.934 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.934 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.935 [2024-10-15 09:09:18.803450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:00.935 [2024-10-15 09:09:18.803630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.935 [2024-10-15 09:09:18.803670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:00.935 [2024-10-15 09:09:18.803714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.935 [2024-10-15 09:09:18.804202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.935 [2024-10-15 09:09:18.804270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:00.935 [2024-10-15 09:09:18.804397] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:00.935 [2024-10-15 09:09:18.804456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:00.935 [2024-10-15 09:09:18.804611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:00.935 [2024-10-15 09:09:18.804651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.935 [2024-10-15 09:09:18.804948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:00.935 [2024-10-15 09:09:18.805135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:00.935 [2024-10-15 09:09:18.805175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:00.935 [2024-10-15 09:09:18.805352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.935 pt3 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.935 
09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.935 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.193 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.193 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.193 "name": "raid_bdev1", 00:10:01.193 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:10:01.193 "strip_size_kb": 0, 00:10:01.193 "state": "online", 00:10:01.193 "raid_level": "raid1", 00:10:01.193 "superblock": true, 00:10:01.193 "num_base_bdevs": 3, 00:10:01.193 "num_base_bdevs_discovered": 2, 00:10:01.193 "num_base_bdevs_operational": 2, 00:10:01.193 "base_bdevs_list": [ 00:10:01.193 { 00:10:01.193 "name": null, 00:10:01.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.193 "is_configured": false, 00:10:01.193 "data_offset": 2048, 00:10:01.193 "data_size": 63488 00:10:01.193 }, 00:10:01.193 { 00:10:01.193 "name": "pt2", 00:10:01.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.193 "is_configured": true, 00:10:01.193 "data_offset": 2048, 00:10:01.193 "data_size": 63488 00:10:01.193 }, 00:10:01.193 { 00:10:01.193 "name": "pt3", 00:10:01.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.193 "is_configured": true, 00:10:01.193 "data_offset": 2048, 00:10:01.193 "data_size": 63488 00:10:01.193 } 00:10:01.193 ] 00:10:01.193 }' 00:10:01.193 09:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.193 09:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.451 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.451 09:09:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.451 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.451 [2024-10-15 09:09:19.254729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.451 [2024-10-15 09:09:19.254787] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.451 [2024-10-15 09:09:19.254886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.452 [2024-10-15 09:09:19.254952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.452 [2024-10-15 09:09:19.254963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.452 [2024-10-15 09:09:19.330557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:01.452 [2024-10-15 09:09:19.330632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.452 [2024-10-15 09:09:19.330653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:01.452 [2024-10-15 09:09:19.330663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.452 [2024-10-15 09:09:19.332886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.452 [2024-10-15 09:09:19.333018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:01.452 [2024-10-15 09:09:19.333111] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:01.452 [2024-10-15 09:09:19.333163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:01.452 [2024-10-15 09:09:19.333305] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:01.452 [2024-10-15 09:09:19.333316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.452 [2024-10-15 09:09:19.333333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:01.452 [2024-10-15 09:09:19.333400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.452 pt1 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.452 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.740 09:09:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.740 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.740 "name": "raid_bdev1", 00:10:01.740 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:10:01.740 "strip_size_kb": 0, 00:10:01.740 "state": "configuring", 00:10:01.740 "raid_level": "raid1", 00:10:01.740 "superblock": true, 00:10:01.740 "num_base_bdevs": 3, 00:10:01.740 "num_base_bdevs_discovered": 1, 00:10:01.740 "num_base_bdevs_operational": 2, 00:10:01.740 "base_bdevs_list": [ 00:10:01.740 { 00:10:01.740 "name": null, 00:10:01.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.740 "is_configured": false, 00:10:01.740 "data_offset": 2048, 00:10:01.740 "data_size": 63488 00:10:01.740 }, 00:10:01.740 { 00:10:01.740 "name": "pt2", 00:10:01.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.740 "is_configured": true, 00:10:01.740 "data_offset": 2048, 00:10:01.740 "data_size": 63488 00:10:01.740 }, 00:10:01.740 { 00:10:01.740 "name": null, 00:10:01.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.740 "is_configured": false, 00:10:01.740 "data_offset": 2048, 00:10:01.740 "data_size": 63488 00:10:01.740 } 00:10:01.740 ] 00:10:01.740 }' 00:10:01.740 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.740 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.001 [2024-10-15 09:09:19.789838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:02.001 [2024-10-15 09:09:19.789920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.001 [2024-10-15 09:09:19.789942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:02.001 [2024-10-15 09:09:19.789951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.001 [2024-10-15 09:09:19.790413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.001 [2024-10-15 09:09:19.790431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:02.001 [2024-10-15 09:09:19.790517] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:02.001 [2024-10-15 09:09:19.790562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:02.001 [2024-10-15 09:09:19.790727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:02.001 [2024-10-15 09:09:19.790737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:02.001 [2024-10-15 09:09:19.790995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:02.001 [2024-10-15 09:09:19.791247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:02.001 [2024-10-15 09:09:19.791265] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:02.001 [2024-10-15 09:09:19.791399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.001 pt3 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.001 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.002 "name": "raid_bdev1", 00:10:02.002 "uuid": "c9e21295-6108-4c30-906b-43149dd14b6d", 00:10:02.002 "strip_size_kb": 0, 00:10:02.002 "state": "online", 00:10:02.002 "raid_level": "raid1", 00:10:02.002 "superblock": true, 00:10:02.002 "num_base_bdevs": 3, 00:10:02.002 "num_base_bdevs_discovered": 2, 00:10:02.002 "num_base_bdevs_operational": 2, 00:10:02.002 "base_bdevs_list": [ 00:10:02.002 { 00:10:02.002 "name": null, 00:10:02.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.002 "is_configured": false, 00:10:02.002 "data_offset": 2048, 00:10:02.002 "data_size": 63488 00:10:02.002 }, 00:10:02.002 { 00:10:02.002 "name": "pt2", 00:10:02.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.002 "is_configured": true, 00:10:02.002 "data_offset": 2048, 00:10:02.002 "data_size": 63488 00:10:02.002 }, 00:10:02.002 { 00:10:02.002 "name": "pt3", 00:10:02.002 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.002 "is_configured": true, 00:10:02.002 "data_offset": 2048, 00:10:02.002 "data_size": 63488 00:10:02.002 } 00:10:02.002 ] 00:10:02.002 }' 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.002 09:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.571 [2024-10-15 09:09:20.357242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c9e21295-6108-4c30-906b-43149dd14b6d '!=' c9e21295-6108-4c30-906b-43149dd14b6d ']' 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68734 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68734 ']' 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68734 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68734 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68734' 00:10:02.571 killing process with pid 68734 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 68734 00:10:02.571 [2024-10-15 09:09:20.416208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.571 [2024-10-15 09:09:20.416422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.571 09:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68734 00:10:02.571 [2024-10-15 09:09:20.416519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.571 [2024-10-15 09:09:20.416571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:03.141 [2024-10-15 09:09:20.737399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.080 09:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:04.080 00:10:04.080 real 0m7.930s 00:10:04.080 user 0m12.319s 00:10:04.080 sys 0m1.500s 00:10:04.080 09:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.080 09:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.080 ************************************ 00:10:04.080 END TEST raid_superblock_test 00:10:04.080 ************************************ 00:10:04.080 09:09:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:04.080 09:09:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:04.080 09:09:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.080 09:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.080 ************************************ 00:10:04.080 START TEST raid_read_error_test 00:10:04.080 ************************************ 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:10:04.080 09:09:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:04.080 09:09:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d0End4ro9k 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69184 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69184 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69184 ']' 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.080 09:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.340 [2024-10-15 09:09:22.057166] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:10:04.340 [2024-10-15 09:09:22.057414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69184 ] 00:10:04.340 [2024-10-15 09:09:22.223579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.599 [2024-10-15 09:09:22.345408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.866 [2024-10-15 09:09:22.546381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.866 [2024-10-15 09:09:22.546429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.146 BaseBdev1_malloc 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.146 true 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.146 09:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.146 [2024-10-15 09:09:22.997312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:05.146 [2024-10-15 09:09:22.997401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.146 [2024-10-15 09:09:22.997427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:05.146 [2024-10-15 09:09:22.997440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.146 [2024-10-15 09:09:22.999726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.146 [2024-10-15 09:09:22.999776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:05.146 BaseBdev1 00:10:05.146 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.146 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.146 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:05.146 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.146 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.406 BaseBdev2_malloc 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.406 true 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.406 [2024-10-15 09:09:23.068260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.406 [2024-10-15 09:09:23.068449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.406 [2024-10-15 09:09:23.068478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:05.406 [2024-10-15 09:09:23.068491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.406 [2024-10-15 09:09:23.071103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.406 [2024-10-15 09:09:23.071159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:05.406 BaseBdev2 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.406 BaseBdev3_malloc 00:10:05.406 09:09:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.406 true 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.406 [2024-10-15 09:09:23.155706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:05.406 [2024-10-15 09:09:23.155851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.406 [2024-10-15 09:09:23.155871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:05.406 [2024-10-15 09:09:23.155882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.406 [2024-10-15 09:09:23.157923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.406 [2024-10-15 09:09:23.157967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:05.406 BaseBdev3 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.406 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.406 [2024-10-15 09:09:23.167748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.406 [2024-10-15 09:09:23.169483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.406 [2024-10-15 09:09:23.169558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.406 [2024-10-15 09:09:23.169760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:05.407 [2024-10-15 09:09:23.169774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.407 [2024-10-15 09:09:23.170006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:05.407 [2024-10-15 09:09:23.170180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:05.407 [2024-10-15 09:09:23.170199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:05.407 [2024-10-15 09:09:23.170342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.407 09:09:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.407 "name": "raid_bdev1", 00:10:05.407 "uuid": "0db8a505-8536-4c3f-bf4f-4d40699a7f46", 00:10:05.407 "strip_size_kb": 0, 00:10:05.407 "state": "online", 00:10:05.407 "raid_level": "raid1", 00:10:05.407 "superblock": true, 00:10:05.407 "num_base_bdevs": 3, 00:10:05.407 "num_base_bdevs_discovered": 3, 00:10:05.407 "num_base_bdevs_operational": 3, 00:10:05.407 "base_bdevs_list": [ 00:10:05.407 { 00:10:05.407 "name": "BaseBdev1", 00:10:05.407 "uuid": "0bdd90b3-ebf6-5028-9631-f761e70dbad9", 00:10:05.407 "is_configured": true, 00:10:05.407 "data_offset": 2048, 00:10:05.407 "data_size": 63488 00:10:05.407 }, 00:10:05.407 { 00:10:05.407 "name": "BaseBdev2", 00:10:05.407 "uuid": "fafc4f06-651b-5cca-9ca2-2e52cdfffa2b", 00:10:05.407 "is_configured": true, 00:10:05.407 "data_offset": 2048, 00:10:05.407 "data_size": 63488 
00:10:05.407 }, 00:10:05.407 { 00:10:05.407 "name": "BaseBdev3", 00:10:05.407 "uuid": "5ec3ed03-fa2c-50e1-9791-86eff69e8e01", 00:10:05.407 "is_configured": true, 00:10:05.407 "data_offset": 2048, 00:10:05.407 "data_size": 63488 00:10:05.407 } 00:10:05.407 ] 00:10:05.407 }' 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.407 09:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.976 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:05.976 09:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:05.976 [2024-10-15 09:09:23.748163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.913 
09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.913 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.914 "name": "raid_bdev1", 00:10:06.914 "uuid": "0db8a505-8536-4c3f-bf4f-4d40699a7f46", 00:10:06.914 "strip_size_kb": 0, 00:10:06.914 "state": "online", 00:10:06.914 "raid_level": "raid1", 00:10:06.914 "superblock": true, 00:10:06.914 "num_base_bdevs": 3, 00:10:06.914 "num_base_bdevs_discovered": 3, 00:10:06.914 "num_base_bdevs_operational": 3, 00:10:06.914 "base_bdevs_list": [ 00:10:06.914 { 00:10:06.914 "name": "BaseBdev1", 00:10:06.914 "uuid": "0bdd90b3-ebf6-5028-9631-f761e70dbad9", 
00:10:06.914 "is_configured": true, 00:10:06.914 "data_offset": 2048, 00:10:06.914 "data_size": 63488 00:10:06.914 }, 00:10:06.914 { 00:10:06.914 "name": "BaseBdev2", 00:10:06.914 "uuid": "fafc4f06-651b-5cca-9ca2-2e52cdfffa2b", 00:10:06.914 "is_configured": true, 00:10:06.914 "data_offset": 2048, 00:10:06.914 "data_size": 63488 00:10:06.914 }, 00:10:06.914 { 00:10:06.914 "name": "BaseBdev3", 00:10:06.914 "uuid": "5ec3ed03-fa2c-50e1-9791-86eff69e8e01", 00:10:06.914 "is_configured": true, 00:10:06.914 "data_offset": 2048, 00:10:06.914 "data_size": 63488 00:10:06.914 } 00:10:06.914 ] 00:10:06.914 }' 00:10:06.914 09:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.914 09:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.482 [2024-10-15 09:09:25.117890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.482 [2024-10-15 09:09:25.118044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.482 [2024-10-15 09:09:25.120720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.482 [2024-10-15 09:09:25.120835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.482 [2024-10-15 09:09:25.120955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.482 [2024-10-15 09:09:25.120999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:07.482 { 00:10:07.482 "results": [ 00:10:07.482 { 00:10:07.482 "job": "raid_bdev1", 
00:10:07.482 "core_mask": "0x1", 00:10:07.482 "workload": "randrw", 00:10:07.482 "percentage": 50, 00:10:07.482 "status": "finished", 00:10:07.482 "queue_depth": 1, 00:10:07.482 "io_size": 131072, 00:10:07.482 "runtime": 1.370741, 00:10:07.482 "iops": 12488.135979006975, 00:10:07.482 "mibps": 1561.0169973758718, 00:10:07.482 "io_failed": 0, 00:10:07.482 "io_timeout": 0, 00:10:07.482 "avg_latency_us": 77.23713713851606, 00:10:07.482 "min_latency_us": 24.034934497816593, 00:10:07.482 "max_latency_us": 1473.844541484716 00:10:07.482 } 00:10:07.482 ], 00:10:07.482 "core_count": 1 00:10:07.482 } 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69184 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69184 ']' 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69184 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69184 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.482 killing process with pid 69184 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69184' 00:10:07.482 09:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69184 00:10:07.482 [2024-10-15 09:09:25.159595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.482 09:09:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69184 00:10:07.741 [2024-10-15 09:09:25.400654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d0End4ro9k 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:09.121 ************************************ 00:10:09.121 END TEST raid_read_error_test 00:10:09.121 ************************************ 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:09.121 00:10:09.121 real 0m4.683s 00:10:09.121 user 0m5.579s 00:10:09.121 sys 0m0.609s 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.121 09:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.121 09:09:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:09.121 09:09:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:09.121 09:09:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.122 09:09:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.122 ************************************ 00:10:09.122 START TEST raid_write_error_test 00:10:09.122 ************************************ 00:10:09.122 09:09:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6Hb3C2Cav6 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69331 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69331 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69331 ']' 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.122 09:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.122 [2024-10-15 09:09:26.823214] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:10:09.122 [2024-10-15 09:09:26.823509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69331 ] 00:10:09.122 [2024-10-15 09:09:26.984573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.381 [2024-10-15 09:09:27.102791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.640 [2024-10-15 09:09:27.317178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.640 [2024-10-15 09:09:27.317235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.900 BaseBdev1_malloc 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.900 true 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.900 [2024-10-15 09:09:27.728650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:09.900 [2024-10-15 09:09:27.728840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.900 [2024-10-15 09:09:27.728885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:09.900 [2024-10-15 09:09:27.728920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.900 [2024-10-15 09:09:27.731131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.900 [2024-10-15 09:09:27.731222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:09.900 BaseBdev1 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.900 BaseBdev2_malloc 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.900 true 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.900 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.162 [2024-10-15 09:09:27.795318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:10.162 [2024-10-15 09:09:27.795388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.162 [2024-10-15 09:09:27.795404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:10.162 [2024-10-15 09:09:27.795415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.162 [2024-10-15 09:09:27.797488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.162 [2024-10-15 09:09:27.797533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:10.162 BaseBdev2 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.162 09:09:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.162 BaseBdev3_malloc 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.162 true 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.162 [2024-10-15 09:09:27.875767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:10.162 [2024-10-15 09:09:27.875921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.162 [2024-10-15 09:09:27.875943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:10.162 [2024-10-15 09:09:27.875954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.162 [2024-10-15 09:09:27.878085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.162 [2024-10-15 09:09:27.878127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:10.162 BaseBdev3 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.162 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.162 [2024-10-15 09:09:27.887817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.162 [2024-10-15 09:09:27.889597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.162 [2024-10-15 09:09:27.889686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.162 [2024-10-15 09:09:27.889902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:10.162 [2024-10-15 09:09:27.889915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.162 [2024-10-15 09:09:27.890149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:10.162 [2024-10-15 09:09:27.890320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:10.162 [2024-10-15 09:09:27.890332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:10.162 [2024-10-15 09:09:27.890480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.163 "name": "raid_bdev1", 00:10:10.163 "uuid": "c47f2b9c-3e8a-4ffa-92b2-094b7d5bb9f4", 00:10:10.163 "strip_size_kb": 0, 00:10:10.163 "state": "online", 00:10:10.163 "raid_level": "raid1", 00:10:10.163 "superblock": true, 00:10:10.163 "num_base_bdevs": 3, 00:10:10.163 "num_base_bdevs_discovered": 3, 00:10:10.163 "num_base_bdevs_operational": 3, 00:10:10.163 "base_bdevs_list": [ 00:10:10.163 { 00:10:10.163 "name": "BaseBdev1", 00:10:10.163 
"uuid": "6234c991-44f2-5a69-aabd-7ebe9467606b", 00:10:10.163 "is_configured": true, 00:10:10.163 "data_offset": 2048, 00:10:10.163 "data_size": 63488 00:10:10.163 }, 00:10:10.163 { 00:10:10.163 "name": "BaseBdev2", 00:10:10.163 "uuid": "3cb222a0-7e31-5a5c-8c63-896a3be858ff", 00:10:10.163 "is_configured": true, 00:10:10.163 "data_offset": 2048, 00:10:10.163 "data_size": 63488 00:10:10.163 }, 00:10:10.163 { 00:10:10.163 "name": "BaseBdev3", 00:10:10.163 "uuid": "3e6045de-ccdc-5cdb-b856-30f0d10ad9d8", 00:10:10.163 "is_configured": true, 00:10:10.163 "data_offset": 2048, 00:10:10.163 "data_size": 63488 00:10:10.163 } 00:10:10.163 ] 00:10:10.163 }' 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.163 09:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.737 09:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:10.737 09:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:10.737 [2024-10-15 09:09:28.408373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.672 [2024-10-15 09:09:29.336211] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:11.672 [2024-10-15 09:09:29.336399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.672 [2024-10-15 09:09:29.336654] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.672 "name": "raid_bdev1", 00:10:11.672 "uuid": "c47f2b9c-3e8a-4ffa-92b2-094b7d5bb9f4", 00:10:11.672 "strip_size_kb": 0, 00:10:11.672 "state": "online", 00:10:11.672 "raid_level": "raid1", 00:10:11.672 "superblock": true, 00:10:11.672 "num_base_bdevs": 3, 00:10:11.672 "num_base_bdevs_discovered": 2, 00:10:11.672 "num_base_bdevs_operational": 2, 00:10:11.672 "base_bdevs_list": [ 00:10:11.672 { 00:10:11.672 "name": null, 00:10:11.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.672 "is_configured": false, 00:10:11.672 "data_offset": 0, 00:10:11.672 "data_size": 63488 00:10:11.672 }, 00:10:11.672 { 00:10:11.672 "name": "BaseBdev2", 00:10:11.672 "uuid": "3cb222a0-7e31-5a5c-8c63-896a3be858ff", 00:10:11.672 "is_configured": true, 00:10:11.672 "data_offset": 2048, 00:10:11.672 "data_size": 63488 00:10:11.672 }, 00:10:11.672 { 00:10:11.672 "name": "BaseBdev3", 00:10:11.672 "uuid": "3e6045de-ccdc-5cdb-b856-30f0d10ad9d8", 00:10:11.672 "is_configured": true, 00:10:11.672 "data_offset": 2048, 00:10:11.672 "data_size": 63488 00:10:11.672 } 00:10:11.672 ] 00:10:11.672 }' 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.672 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.930 [2024-10-15 09:09:29.799115] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.930 [2024-10-15 09:09:29.799168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.930 [2024-10-15 09:09:29.802166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.930 [2024-10-15 09:09:29.802239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.930 [2024-10-15 09:09:29.802328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.930 [2024-10-15 09:09:29.802342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:11.930 { 00:10:11.930 "results": [ 00:10:11.930 { 00:10:11.930 "job": "raid_bdev1", 00:10:11.930 "core_mask": "0x1", 00:10:11.930 "workload": "randrw", 00:10:11.930 "percentage": 50, 00:10:11.930 "status": "finished", 00:10:11.930 "queue_depth": 1, 00:10:11.930 "io_size": 131072, 00:10:11.930 "runtime": 1.391367, 00:10:11.930 "iops": 13933.77879452366, 00:10:11.930 "mibps": 1741.7223493154574, 00:10:11.930 "io_failed": 0, 00:10:11.930 "io_timeout": 0, 00:10:11.930 "avg_latency_us": 69.03119620742572, 00:10:11.930 "min_latency_us": 23.699563318777294, 00:10:11.930 "max_latency_us": 1724.2550218340612 00:10:11.930 } 00:10:11.930 ], 00:10:11.930 "core_count": 1 00:10:11.930 } 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69331 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69331 ']' 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69331 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:11.930 09:09:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.930 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69331 00:10:12.188 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:12.188 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:12.188 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69331' 00:10:12.188 killing process with pid 69331 00:10:12.188 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69331 00:10:12.188 [2024-10-15 09:09:29.852847] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.188 09:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69331 00:10:12.446 [2024-10-15 09:09:30.096983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6Hb3C2Cav6 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:13.824 00:10:13.824 real 0m4.640s 00:10:13.824 user 0m5.455s 00:10:13.824 sys 0m0.619s 00:10:13.824 
************************************ 00:10:13.824 END TEST raid_write_error_test 00:10:13.824 ************************************ 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.824 09:09:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.824 09:09:31 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:13.824 09:09:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:13.824 09:09:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:13.824 09:09:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:13.824 09:09:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.824 09:09:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.824 ************************************ 00:10:13.824 START TEST raid_state_function_test 00:10:13.824 ************************************ 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:13.824 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69469 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69469' 00:10:13.825 Process raid pid: 69469 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69469 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69469 ']' 00:10:13.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.825 09:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.825 [2024-10-15 09:09:31.521097] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:10:13.825 [2024-10-15 09:09:31.521252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.825 [2024-10-15 09:09:31.693631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.083 [2024-10-15 09:09:31.816183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.341 [2024-10-15 09:09:32.037446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.341 [2024-10-15 09:09:32.037498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.654 [2024-10-15 09:09:32.391361] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.654 [2024-10-15 09:09:32.391557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.654 [2024-10-15 09:09:32.391590] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.654 [2024-10-15 09:09:32.391617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.654 [2024-10-15 09:09:32.391637] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:14.654 [2024-10-15 09:09:32.391659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.654 [2024-10-15 09:09:32.391697] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:14.654 [2024-10-15 09:09:32.391723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.654 "name": "Existed_Raid", 00:10:14.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.654 "strip_size_kb": 64, 00:10:14.654 "state": "configuring", 00:10:14.654 "raid_level": "raid0", 00:10:14.654 "superblock": false, 00:10:14.654 "num_base_bdevs": 4, 00:10:14.654 "num_base_bdevs_discovered": 0, 00:10:14.654 "num_base_bdevs_operational": 4, 00:10:14.654 "base_bdevs_list": [ 00:10:14.654 { 00:10:14.654 "name": "BaseBdev1", 00:10:14.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.654 "is_configured": false, 00:10:14.654 "data_offset": 0, 00:10:14.654 "data_size": 0 00:10:14.654 }, 00:10:14.654 { 00:10:14.654 "name": "BaseBdev2", 00:10:14.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.654 "is_configured": false, 00:10:14.654 "data_offset": 0, 00:10:14.654 "data_size": 0 00:10:14.654 }, 00:10:14.654 { 00:10:14.654 "name": "BaseBdev3", 00:10:14.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.654 "is_configured": false, 00:10:14.654 "data_offset": 0, 00:10:14.654 "data_size": 0 00:10:14.654 }, 00:10:14.654 { 00:10:14.654 "name": "BaseBdev4", 00:10:14.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.654 "is_configured": false, 00:10:14.654 "data_offset": 0, 00:10:14.654 "data_size": 0 00:10:14.654 } 00:10:14.654 ] 00:10:14.654 }' 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.654 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:15.221 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.221 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.221 [2024-10-15 09:09:32.866458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.221 [2024-10-15 09:09:32.866514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:15.221 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.222 [2024-10-15 09:09:32.878408] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.222 [2024-10-15 09:09:32.878457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.222 [2024-10-15 09:09:32.878467] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.222 [2024-10-15 09:09:32.878477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.222 [2024-10-15 09:09:32.878483] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.222 [2024-10-15 09:09:32.878492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.222 [2024-10-15 09:09:32.878498] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:15.222 [2024-10-15 09:09:32.878508] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.222 [2024-10-15 09:09:32.930342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.222 BaseBdev1 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.222 [ 00:10:15.222 { 00:10:15.222 "name": "BaseBdev1", 00:10:15.222 "aliases": [ 00:10:15.222 "fddabe11-e5a5-49f2-a002-5a8639161f66" 00:10:15.222 ], 00:10:15.222 "product_name": "Malloc disk", 00:10:15.222 "block_size": 512, 00:10:15.222 "num_blocks": 65536, 00:10:15.222 "uuid": "fddabe11-e5a5-49f2-a002-5a8639161f66", 00:10:15.222 "assigned_rate_limits": { 00:10:15.222 "rw_ios_per_sec": 0, 00:10:15.222 "rw_mbytes_per_sec": 0, 00:10:15.222 "r_mbytes_per_sec": 0, 00:10:15.222 "w_mbytes_per_sec": 0 00:10:15.222 }, 00:10:15.222 "claimed": true, 00:10:15.222 "claim_type": "exclusive_write", 00:10:15.222 "zoned": false, 00:10:15.222 "supported_io_types": { 00:10:15.222 "read": true, 00:10:15.222 "write": true, 00:10:15.222 "unmap": true, 00:10:15.222 "flush": true, 00:10:15.222 "reset": true, 00:10:15.222 "nvme_admin": false, 00:10:15.222 "nvme_io": false, 00:10:15.222 "nvme_io_md": false, 00:10:15.222 "write_zeroes": true, 00:10:15.222 "zcopy": true, 00:10:15.222 "get_zone_info": false, 00:10:15.222 "zone_management": false, 00:10:15.222 "zone_append": false, 00:10:15.222 "compare": false, 00:10:15.222 "compare_and_write": false, 00:10:15.222 "abort": true, 00:10:15.222 "seek_hole": false, 00:10:15.222 "seek_data": false, 00:10:15.222 "copy": true, 00:10:15.222 "nvme_iov_md": false 00:10:15.222 }, 00:10:15.222 "memory_domains": [ 00:10:15.222 { 00:10:15.222 "dma_device_id": "system", 00:10:15.222 "dma_device_type": 1 00:10:15.222 }, 00:10:15.222 { 00:10:15.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.222 "dma_device_type": 2 00:10:15.222 } 00:10:15.222 ], 00:10:15.222 "driver_specific": {} 00:10:15.222 } 00:10:15.222 ] 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.222 09:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.222 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.222 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.222 "name": "Existed_Raid", 
00:10:15.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.222 "strip_size_kb": 64, 00:10:15.222 "state": "configuring", 00:10:15.222 "raid_level": "raid0", 00:10:15.222 "superblock": false, 00:10:15.222 "num_base_bdevs": 4, 00:10:15.222 "num_base_bdevs_discovered": 1, 00:10:15.222 "num_base_bdevs_operational": 4, 00:10:15.222 "base_bdevs_list": [ 00:10:15.222 { 00:10:15.222 "name": "BaseBdev1", 00:10:15.222 "uuid": "fddabe11-e5a5-49f2-a002-5a8639161f66", 00:10:15.222 "is_configured": true, 00:10:15.222 "data_offset": 0, 00:10:15.222 "data_size": 65536 00:10:15.222 }, 00:10:15.222 { 00:10:15.222 "name": "BaseBdev2", 00:10:15.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.222 "is_configured": false, 00:10:15.222 "data_offset": 0, 00:10:15.222 "data_size": 0 00:10:15.222 }, 00:10:15.222 { 00:10:15.222 "name": "BaseBdev3", 00:10:15.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.222 "is_configured": false, 00:10:15.222 "data_offset": 0, 00:10:15.222 "data_size": 0 00:10:15.222 }, 00:10:15.222 { 00:10:15.222 "name": "BaseBdev4", 00:10:15.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.222 "is_configured": false, 00:10:15.222 "data_offset": 0, 00:10:15.222 "data_size": 0 00:10:15.222 } 00:10:15.222 ] 00:10:15.222 }' 00:10:15.222 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.222 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.790 [2024-10-15 09:09:33.433594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.790 [2024-10-15 09:09:33.433772] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.790 [2024-10-15 09:09:33.445639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.790 [2024-10-15 09:09:33.447704] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.790 [2024-10-15 09:09:33.447760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.790 [2024-10-15 09:09:33.447770] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.790 [2024-10-15 09:09:33.447781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.790 [2024-10-15 09:09:33.447788] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:15.790 [2024-10-15 09:09:33.447797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.790 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.791 "name": "Existed_Raid", 00:10:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.791 "strip_size_kb": 64, 00:10:15.791 "state": "configuring", 00:10:15.791 "raid_level": "raid0", 00:10:15.791 "superblock": false, 00:10:15.791 "num_base_bdevs": 4, 00:10:15.791 
"num_base_bdevs_discovered": 1, 00:10:15.791 "num_base_bdevs_operational": 4, 00:10:15.791 "base_bdevs_list": [ 00:10:15.791 { 00:10:15.791 "name": "BaseBdev1", 00:10:15.791 "uuid": "fddabe11-e5a5-49f2-a002-5a8639161f66", 00:10:15.791 "is_configured": true, 00:10:15.791 "data_offset": 0, 00:10:15.791 "data_size": 65536 00:10:15.791 }, 00:10:15.791 { 00:10:15.791 "name": "BaseBdev2", 00:10:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.791 "is_configured": false, 00:10:15.791 "data_offset": 0, 00:10:15.791 "data_size": 0 00:10:15.791 }, 00:10:15.791 { 00:10:15.791 "name": "BaseBdev3", 00:10:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.791 "is_configured": false, 00:10:15.791 "data_offset": 0, 00:10:15.791 "data_size": 0 00:10:15.791 }, 00:10:15.791 { 00:10:15.791 "name": "BaseBdev4", 00:10:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.791 "is_configured": false, 00:10:15.791 "data_offset": 0, 00:10:15.791 "data_size": 0 00:10:15.791 } 00:10:15.791 ] 00:10:15.791 }' 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.791 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 [2024-10-15 09:09:33.892509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.051 BaseBdev2 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:16.051 09:09:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 [ 00:10:16.051 { 00:10:16.051 "name": "BaseBdev2", 00:10:16.051 "aliases": [ 00:10:16.051 "e83bc882-0f0b-44ca-a0cf-b648473733ae" 00:10:16.051 ], 00:10:16.051 "product_name": "Malloc disk", 00:10:16.051 "block_size": 512, 00:10:16.051 "num_blocks": 65536, 00:10:16.051 "uuid": "e83bc882-0f0b-44ca-a0cf-b648473733ae", 00:10:16.051 "assigned_rate_limits": { 00:10:16.051 "rw_ios_per_sec": 0, 00:10:16.051 "rw_mbytes_per_sec": 0, 00:10:16.051 "r_mbytes_per_sec": 0, 00:10:16.051 "w_mbytes_per_sec": 0 00:10:16.051 }, 00:10:16.051 "claimed": true, 00:10:16.051 "claim_type": "exclusive_write", 00:10:16.051 "zoned": false, 00:10:16.051 "supported_io_types": { 
00:10:16.051 "read": true, 00:10:16.051 "write": true, 00:10:16.051 "unmap": true, 00:10:16.051 "flush": true, 00:10:16.051 "reset": true, 00:10:16.051 "nvme_admin": false, 00:10:16.051 "nvme_io": false, 00:10:16.051 "nvme_io_md": false, 00:10:16.051 "write_zeroes": true, 00:10:16.051 "zcopy": true, 00:10:16.051 "get_zone_info": false, 00:10:16.051 "zone_management": false, 00:10:16.051 "zone_append": false, 00:10:16.051 "compare": false, 00:10:16.051 "compare_and_write": false, 00:10:16.051 "abort": true, 00:10:16.051 "seek_hole": false, 00:10:16.051 "seek_data": false, 00:10:16.051 "copy": true, 00:10:16.051 "nvme_iov_md": false 00:10:16.051 }, 00:10:16.051 "memory_domains": [ 00:10:16.051 { 00:10:16.051 "dma_device_id": "system", 00:10:16.051 "dma_device_type": 1 00:10:16.051 }, 00:10:16.051 { 00:10:16.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.051 "dma_device_type": 2 00:10:16.051 } 00:10:16.051 ], 00:10:16.051 "driver_specific": {} 00:10:16.051 } 00:10:16.051 ] 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.312 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.312 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.312 "name": "Existed_Raid", 00:10:16.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.312 "strip_size_kb": 64, 00:10:16.312 "state": "configuring", 00:10:16.312 "raid_level": "raid0", 00:10:16.312 "superblock": false, 00:10:16.312 "num_base_bdevs": 4, 00:10:16.312 "num_base_bdevs_discovered": 2, 00:10:16.312 "num_base_bdevs_operational": 4, 00:10:16.312 "base_bdevs_list": [ 00:10:16.312 { 00:10:16.312 "name": "BaseBdev1", 00:10:16.312 "uuid": "fddabe11-e5a5-49f2-a002-5a8639161f66", 00:10:16.312 "is_configured": true, 00:10:16.312 "data_offset": 0, 00:10:16.312 "data_size": 65536 00:10:16.312 }, 00:10:16.312 { 00:10:16.312 "name": "BaseBdev2", 00:10:16.312 "uuid": "e83bc882-0f0b-44ca-a0cf-b648473733ae", 00:10:16.312 
"is_configured": true, 00:10:16.312 "data_offset": 0, 00:10:16.312 "data_size": 65536 00:10:16.312 }, 00:10:16.312 { 00:10:16.312 "name": "BaseBdev3", 00:10:16.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.312 "is_configured": false, 00:10:16.312 "data_offset": 0, 00:10:16.312 "data_size": 0 00:10:16.312 }, 00:10:16.312 { 00:10:16.312 "name": "BaseBdev4", 00:10:16.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.312 "is_configured": false, 00:10:16.312 "data_offset": 0, 00:10:16.312 "data_size": 0 00:10:16.312 } 00:10:16.312 ] 00:10:16.312 }' 00:10:16.312 09:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.312 09:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.571 [2024-10-15 09:09:34.418844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.571 BaseBdev3 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.571 [ 00:10:16.571 { 00:10:16.571 "name": "BaseBdev3", 00:10:16.571 "aliases": [ 00:10:16.571 "d5ab75e7-8a8d-4f84-90b3-33196a4a4a49" 00:10:16.571 ], 00:10:16.571 "product_name": "Malloc disk", 00:10:16.571 "block_size": 512, 00:10:16.571 "num_blocks": 65536, 00:10:16.571 "uuid": "d5ab75e7-8a8d-4f84-90b3-33196a4a4a49", 00:10:16.571 "assigned_rate_limits": { 00:10:16.571 "rw_ios_per_sec": 0, 00:10:16.571 "rw_mbytes_per_sec": 0, 00:10:16.571 "r_mbytes_per_sec": 0, 00:10:16.571 "w_mbytes_per_sec": 0 00:10:16.571 }, 00:10:16.571 "claimed": true, 00:10:16.571 "claim_type": "exclusive_write", 00:10:16.571 "zoned": false, 00:10:16.571 "supported_io_types": { 00:10:16.571 "read": true, 00:10:16.571 "write": true, 00:10:16.571 "unmap": true, 00:10:16.571 "flush": true, 00:10:16.571 "reset": true, 00:10:16.571 "nvme_admin": false, 00:10:16.571 "nvme_io": false, 00:10:16.571 "nvme_io_md": false, 00:10:16.571 "write_zeroes": true, 00:10:16.571 "zcopy": true, 00:10:16.571 "get_zone_info": false, 00:10:16.571 "zone_management": false, 00:10:16.571 "zone_append": false, 00:10:16.571 "compare": false, 00:10:16.571 "compare_and_write": false, 
00:10:16.571 "abort": true, 00:10:16.571 "seek_hole": false, 00:10:16.571 "seek_data": false, 00:10:16.571 "copy": true, 00:10:16.571 "nvme_iov_md": false 00:10:16.571 }, 00:10:16.571 "memory_domains": [ 00:10:16.571 { 00:10:16.571 "dma_device_id": "system", 00:10:16.571 "dma_device_type": 1 00:10:16.571 }, 00:10:16.571 { 00:10:16.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.571 "dma_device_type": 2 00:10:16.571 } 00:10:16.571 ], 00:10:16.571 "driver_specific": {} 00:10:16.571 } 00:10:16.571 ] 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.571 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:16.572 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.830 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.830 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.830 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.830 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.830 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.830 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.830 "name": "Existed_Raid", 00:10:16.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.830 "strip_size_kb": 64, 00:10:16.830 "state": "configuring", 00:10:16.830 "raid_level": "raid0", 00:10:16.830 "superblock": false, 00:10:16.830 "num_base_bdevs": 4, 00:10:16.830 "num_base_bdevs_discovered": 3, 00:10:16.830 "num_base_bdevs_operational": 4, 00:10:16.830 "base_bdevs_list": [ 00:10:16.830 { 00:10:16.830 "name": "BaseBdev1", 00:10:16.830 "uuid": "fddabe11-e5a5-49f2-a002-5a8639161f66", 00:10:16.830 "is_configured": true, 00:10:16.830 "data_offset": 0, 00:10:16.830 "data_size": 65536 00:10:16.830 }, 00:10:16.830 { 00:10:16.830 "name": "BaseBdev2", 00:10:16.830 "uuid": "e83bc882-0f0b-44ca-a0cf-b648473733ae", 00:10:16.830 "is_configured": true, 00:10:16.830 "data_offset": 0, 00:10:16.830 "data_size": 65536 00:10:16.830 }, 00:10:16.830 { 00:10:16.830 "name": "BaseBdev3", 00:10:16.830 "uuid": "d5ab75e7-8a8d-4f84-90b3-33196a4a4a49", 00:10:16.830 "is_configured": true, 00:10:16.830 "data_offset": 0, 00:10:16.830 "data_size": 65536 00:10:16.830 }, 00:10:16.830 { 00:10:16.830 "name": "BaseBdev4", 00:10:16.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.830 "is_configured": false, 
00:10:16.830 "data_offset": 0, 00:10:16.830 "data_size": 0 00:10:16.830 } 00:10:16.830 ] 00:10:16.830 }' 00:10:16.830 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.830 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.089 [2024-10-15 09:09:34.924126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:17.089 [2024-10-15 09:09:34.924188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:17.089 [2024-10-15 09:09:34.924198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:17.089 [2024-10-15 09:09:34.924470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:17.089 [2024-10-15 09:09:34.924657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:17.089 [2024-10-15 09:09:34.924671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:17.089 [2024-10-15 09:09:34.925014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.089 BaseBdev4 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.089 [ 00:10:17.089 { 00:10:17.089 "name": "BaseBdev4", 00:10:17.089 "aliases": [ 00:10:17.089 "e3f3df2a-a50d-40ab-b61f-3584c930efae" 00:10:17.089 ], 00:10:17.089 "product_name": "Malloc disk", 00:10:17.089 "block_size": 512, 00:10:17.089 "num_blocks": 65536, 00:10:17.089 "uuid": "e3f3df2a-a50d-40ab-b61f-3584c930efae", 00:10:17.089 "assigned_rate_limits": { 00:10:17.089 "rw_ios_per_sec": 0, 00:10:17.089 "rw_mbytes_per_sec": 0, 00:10:17.089 "r_mbytes_per_sec": 0, 00:10:17.089 "w_mbytes_per_sec": 0 00:10:17.089 }, 00:10:17.089 "claimed": true, 00:10:17.089 "claim_type": "exclusive_write", 00:10:17.089 "zoned": false, 00:10:17.089 "supported_io_types": { 00:10:17.089 "read": true, 00:10:17.089 "write": true, 00:10:17.089 "unmap": true, 00:10:17.089 "flush": true, 00:10:17.089 "reset": true, 00:10:17.089 
"nvme_admin": false, 00:10:17.089 "nvme_io": false, 00:10:17.089 "nvme_io_md": false, 00:10:17.089 "write_zeroes": true, 00:10:17.089 "zcopy": true, 00:10:17.089 "get_zone_info": false, 00:10:17.089 "zone_management": false, 00:10:17.089 "zone_append": false, 00:10:17.089 "compare": false, 00:10:17.089 "compare_and_write": false, 00:10:17.089 "abort": true, 00:10:17.089 "seek_hole": false, 00:10:17.089 "seek_data": false, 00:10:17.089 "copy": true, 00:10:17.089 "nvme_iov_md": false 00:10:17.089 }, 00:10:17.089 "memory_domains": [ 00:10:17.089 { 00:10:17.089 "dma_device_id": "system", 00:10:17.089 "dma_device_type": 1 00:10:17.089 }, 00:10:17.089 { 00:10:17.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.089 "dma_device_type": 2 00:10:17.089 } 00:10:17.089 ], 00:10:17.089 "driver_specific": {} 00:10:17.089 } 00:10:17.089 ] 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.089 09:09:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.089 09:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.347 09:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.347 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.347 "name": "Existed_Raid", 00:10:17.347 "uuid": "af2df7a5-53d2-4c55-89fb-a4b0b64739d4", 00:10:17.347 "strip_size_kb": 64, 00:10:17.347 "state": "online", 00:10:17.347 "raid_level": "raid0", 00:10:17.347 "superblock": false, 00:10:17.347 "num_base_bdevs": 4, 00:10:17.347 "num_base_bdevs_discovered": 4, 00:10:17.347 "num_base_bdevs_operational": 4, 00:10:17.347 "base_bdevs_list": [ 00:10:17.347 { 00:10:17.347 "name": "BaseBdev1", 00:10:17.347 "uuid": "fddabe11-e5a5-49f2-a002-5a8639161f66", 00:10:17.347 "is_configured": true, 00:10:17.347 "data_offset": 0, 00:10:17.347 "data_size": 65536 00:10:17.347 }, 00:10:17.347 { 00:10:17.347 "name": "BaseBdev2", 00:10:17.347 "uuid": "e83bc882-0f0b-44ca-a0cf-b648473733ae", 00:10:17.347 "is_configured": true, 00:10:17.347 "data_offset": 0, 00:10:17.347 "data_size": 65536 00:10:17.347 }, 00:10:17.347 { 00:10:17.347 "name": "BaseBdev3", 00:10:17.347 "uuid": 
"d5ab75e7-8a8d-4f84-90b3-33196a4a4a49", 00:10:17.347 "is_configured": true, 00:10:17.347 "data_offset": 0, 00:10:17.347 "data_size": 65536 00:10:17.347 }, 00:10:17.347 { 00:10:17.347 "name": "BaseBdev4", 00:10:17.347 "uuid": "e3f3df2a-a50d-40ab-b61f-3584c930efae", 00:10:17.347 "is_configured": true, 00:10:17.347 "data_offset": 0, 00:10:17.347 "data_size": 65536 00:10:17.347 } 00:10:17.347 ] 00:10:17.347 }' 00:10:17.347 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.347 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.606 [2024-10-15 09:09:35.423767] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.606 09:09:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.606 "name": "Existed_Raid", 00:10:17.606 "aliases": [ 00:10:17.606 "af2df7a5-53d2-4c55-89fb-a4b0b64739d4" 00:10:17.606 ], 00:10:17.606 "product_name": "Raid Volume", 00:10:17.606 "block_size": 512, 00:10:17.606 "num_blocks": 262144, 00:10:17.606 "uuid": "af2df7a5-53d2-4c55-89fb-a4b0b64739d4", 00:10:17.606 "assigned_rate_limits": { 00:10:17.606 "rw_ios_per_sec": 0, 00:10:17.606 "rw_mbytes_per_sec": 0, 00:10:17.606 "r_mbytes_per_sec": 0, 00:10:17.606 "w_mbytes_per_sec": 0 00:10:17.606 }, 00:10:17.606 "claimed": false, 00:10:17.606 "zoned": false, 00:10:17.606 "supported_io_types": { 00:10:17.606 "read": true, 00:10:17.606 "write": true, 00:10:17.606 "unmap": true, 00:10:17.606 "flush": true, 00:10:17.606 "reset": true, 00:10:17.606 "nvme_admin": false, 00:10:17.606 "nvme_io": false, 00:10:17.606 "nvme_io_md": false, 00:10:17.606 "write_zeroes": true, 00:10:17.606 "zcopy": false, 00:10:17.606 "get_zone_info": false, 00:10:17.606 "zone_management": false, 00:10:17.606 "zone_append": false, 00:10:17.606 "compare": false, 00:10:17.606 "compare_and_write": false, 00:10:17.606 "abort": false, 00:10:17.606 "seek_hole": false, 00:10:17.606 "seek_data": false, 00:10:17.606 "copy": false, 00:10:17.606 "nvme_iov_md": false 00:10:17.606 }, 00:10:17.606 "memory_domains": [ 00:10:17.606 { 00:10:17.606 "dma_device_id": "system", 00:10:17.606 "dma_device_type": 1 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.606 "dma_device_type": 2 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "dma_device_id": "system", 00:10:17.606 "dma_device_type": 1 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.606 "dma_device_type": 2 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "dma_device_id": "system", 00:10:17.606 "dma_device_type": 1 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:17.606 "dma_device_type": 2 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "dma_device_id": "system", 00:10:17.606 "dma_device_type": 1 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.606 "dma_device_type": 2 00:10:17.606 } 00:10:17.606 ], 00:10:17.606 "driver_specific": { 00:10:17.606 "raid": { 00:10:17.606 "uuid": "af2df7a5-53d2-4c55-89fb-a4b0b64739d4", 00:10:17.606 "strip_size_kb": 64, 00:10:17.606 "state": "online", 00:10:17.606 "raid_level": "raid0", 00:10:17.606 "superblock": false, 00:10:17.606 "num_base_bdevs": 4, 00:10:17.606 "num_base_bdevs_discovered": 4, 00:10:17.606 "num_base_bdevs_operational": 4, 00:10:17.606 "base_bdevs_list": [ 00:10:17.606 { 00:10:17.606 "name": "BaseBdev1", 00:10:17.606 "uuid": "fddabe11-e5a5-49f2-a002-5a8639161f66", 00:10:17.606 "is_configured": true, 00:10:17.606 "data_offset": 0, 00:10:17.606 "data_size": 65536 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "name": "BaseBdev2", 00:10:17.606 "uuid": "e83bc882-0f0b-44ca-a0cf-b648473733ae", 00:10:17.606 "is_configured": true, 00:10:17.606 "data_offset": 0, 00:10:17.606 "data_size": 65536 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "name": "BaseBdev3", 00:10:17.606 "uuid": "d5ab75e7-8a8d-4f84-90b3-33196a4a4a49", 00:10:17.606 "is_configured": true, 00:10:17.606 "data_offset": 0, 00:10:17.606 "data_size": 65536 00:10:17.606 }, 00:10:17.606 { 00:10:17.606 "name": "BaseBdev4", 00:10:17.606 "uuid": "e3f3df2a-a50d-40ab-b61f-3584c930efae", 00:10:17.606 "is_configured": true, 00:10:17.606 "data_offset": 0, 00:10:17.606 "data_size": 65536 00:10:17.606 } 00:10:17.606 ] 00:10:17.606 } 00:10:17.606 } 00:10:17.606 }' 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.606 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.606 BaseBdev2 00:10:17.606 BaseBdev3 
00:10:17.606 BaseBdev4' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.873 09:09:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.873 09:09:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.873 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.873 [2024-10-15 09:09:35.754959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.873 [2024-10-15 09:09:35.755119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.873 [2024-10-15 09:09:35.755213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.143 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.143 "name": "Existed_Raid", 00:10:18.143 "uuid": "af2df7a5-53d2-4c55-89fb-a4b0b64739d4", 00:10:18.143 "strip_size_kb": 64, 00:10:18.143 "state": "offline", 00:10:18.143 "raid_level": "raid0", 00:10:18.143 "superblock": false, 00:10:18.143 "num_base_bdevs": 4, 00:10:18.143 "num_base_bdevs_discovered": 3, 00:10:18.143 "num_base_bdevs_operational": 3, 00:10:18.143 "base_bdevs_list": [ 00:10:18.143 { 00:10:18.143 "name": null, 00:10:18.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.143 "is_configured": false, 00:10:18.143 "data_offset": 0, 00:10:18.143 "data_size": 65536 00:10:18.143 }, 00:10:18.143 { 00:10:18.143 "name": "BaseBdev2", 00:10:18.143 "uuid": "e83bc882-0f0b-44ca-a0cf-b648473733ae", 00:10:18.143 "is_configured": 
true, 00:10:18.143 "data_offset": 0, 00:10:18.143 "data_size": 65536 00:10:18.143 }, 00:10:18.143 { 00:10:18.143 "name": "BaseBdev3", 00:10:18.143 "uuid": "d5ab75e7-8a8d-4f84-90b3-33196a4a4a49", 00:10:18.143 "is_configured": true, 00:10:18.143 "data_offset": 0, 00:10:18.143 "data_size": 65536 00:10:18.143 }, 00:10:18.143 { 00:10:18.144 "name": "BaseBdev4", 00:10:18.144 "uuid": "e3f3df2a-a50d-40ab-b61f-3584c930efae", 00:10:18.144 "is_configured": true, 00:10:18.144 "data_offset": 0, 00:10:18.144 "data_size": 65536 00:10:18.144 } 00:10:18.144 ] 00:10:18.144 }' 00:10:18.144 09:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.144 09:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.401 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:18.401 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.401 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.401 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.401 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.401 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.659 [2024-10-15 09:09:36.339866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.659 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.659 [2024-10-15 09:09:36.480414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.917 09:09:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.917 [2024-10-15 09:09:36.649359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:18.917 [2024-10-15 09:09:36.649527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.917 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.176 BaseBdev2 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.176 [ 00:10:19.176 { 00:10:19.176 "name": "BaseBdev2", 00:10:19.176 "aliases": [ 00:10:19.176 "b3460c2f-521f-45e0-84b4-d96b738b3e5e" 00:10:19.176 ], 00:10:19.176 "product_name": "Malloc disk", 00:10:19.176 "block_size": 512, 00:10:19.176 "num_blocks": 65536, 00:10:19.176 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:19.176 "assigned_rate_limits": { 00:10:19.176 "rw_ios_per_sec": 0, 00:10:19.176 "rw_mbytes_per_sec": 0, 00:10:19.176 "r_mbytes_per_sec": 0, 00:10:19.176 "w_mbytes_per_sec": 0 00:10:19.176 }, 00:10:19.176 "claimed": false, 00:10:19.176 "zoned": false, 00:10:19.176 "supported_io_types": { 00:10:19.176 "read": true, 00:10:19.176 "write": true, 00:10:19.176 "unmap": true, 00:10:19.176 "flush": true, 00:10:19.176 "reset": true, 00:10:19.176 "nvme_admin": false, 00:10:19.176 "nvme_io": false, 00:10:19.176 "nvme_io_md": false, 00:10:19.176 "write_zeroes": true, 00:10:19.176 "zcopy": true, 00:10:19.176 "get_zone_info": false, 00:10:19.176 "zone_management": false, 00:10:19.176 "zone_append": false, 00:10:19.176 "compare": false, 00:10:19.176 "compare_and_write": false, 00:10:19.176 "abort": true, 00:10:19.176 "seek_hole": false, 00:10:19.176 
"seek_data": false, 00:10:19.176 "copy": true, 00:10:19.176 "nvme_iov_md": false 00:10:19.176 }, 00:10:19.176 "memory_domains": [ 00:10:19.176 { 00:10:19.176 "dma_device_id": "system", 00:10:19.176 "dma_device_type": 1 00:10:19.176 }, 00:10:19.176 { 00:10:19.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.176 "dma_device_type": 2 00:10:19.176 } 00:10:19.176 ], 00:10:19.176 "driver_specific": {} 00:10:19.176 } 00:10:19.176 ] 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.176 BaseBdev3 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.176 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.176 [ 00:10:19.176 { 00:10:19.176 "name": "BaseBdev3", 00:10:19.176 "aliases": [ 00:10:19.176 "001710f4-ac91-4fef-b6c8-fd206f1b4914" 00:10:19.176 ], 00:10:19.176 "product_name": "Malloc disk", 00:10:19.176 "block_size": 512, 00:10:19.176 "num_blocks": 65536, 00:10:19.176 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:19.176 "assigned_rate_limits": { 00:10:19.176 "rw_ios_per_sec": 0, 00:10:19.176 "rw_mbytes_per_sec": 0, 00:10:19.177 "r_mbytes_per_sec": 0, 00:10:19.177 "w_mbytes_per_sec": 0 00:10:19.177 }, 00:10:19.177 "claimed": false, 00:10:19.177 "zoned": false, 00:10:19.177 "supported_io_types": { 00:10:19.177 "read": true, 00:10:19.177 "write": true, 00:10:19.177 "unmap": true, 00:10:19.177 "flush": true, 00:10:19.177 "reset": true, 00:10:19.177 "nvme_admin": false, 00:10:19.177 "nvme_io": false, 00:10:19.177 "nvme_io_md": false, 00:10:19.177 "write_zeroes": true, 00:10:19.177 "zcopy": true, 00:10:19.177 "get_zone_info": false, 00:10:19.177 "zone_management": false, 00:10:19.177 "zone_append": false, 00:10:19.177 "compare": false, 00:10:19.177 "compare_and_write": false, 00:10:19.177 "abort": true, 00:10:19.177 "seek_hole": false, 00:10:19.177 "seek_data": false, 
00:10:19.177 "copy": true, 00:10:19.177 "nvme_iov_md": false 00:10:19.177 }, 00:10:19.177 "memory_domains": [ 00:10:19.177 { 00:10:19.177 "dma_device_id": "system", 00:10:19.177 "dma_device_type": 1 00:10:19.177 }, 00:10:19.177 { 00:10:19.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.177 "dma_device_type": 2 00:10:19.177 } 00:10:19.177 ], 00:10:19.177 "driver_specific": {} 00:10:19.177 } 00:10:19.177 ] 00:10:19.177 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.177 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:19.177 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:19.177 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:19.177 09:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:19.177 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.177 09:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.177 BaseBdev4 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.177 
09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.177 [ 00:10:19.177 { 00:10:19.177 "name": "BaseBdev4", 00:10:19.177 "aliases": [ 00:10:19.177 "d12a64ad-cc29-426f-a38f-893e6cd8a5c2" 00:10:19.177 ], 00:10:19.177 "product_name": "Malloc disk", 00:10:19.177 "block_size": 512, 00:10:19.177 "num_blocks": 65536, 00:10:19.177 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:19.177 "assigned_rate_limits": { 00:10:19.177 "rw_ios_per_sec": 0, 00:10:19.177 "rw_mbytes_per_sec": 0, 00:10:19.177 "r_mbytes_per_sec": 0, 00:10:19.177 "w_mbytes_per_sec": 0 00:10:19.177 }, 00:10:19.177 "claimed": false, 00:10:19.177 "zoned": false, 00:10:19.177 "supported_io_types": { 00:10:19.177 "read": true, 00:10:19.177 "write": true, 00:10:19.177 "unmap": true, 00:10:19.177 "flush": true, 00:10:19.177 "reset": true, 00:10:19.177 "nvme_admin": false, 00:10:19.177 "nvme_io": false, 00:10:19.177 "nvme_io_md": false, 00:10:19.177 "write_zeroes": true, 00:10:19.177 "zcopy": true, 00:10:19.177 "get_zone_info": false, 00:10:19.177 "zone_management": false, 00:10:19.177 "zone_append": false, 00:10:19.177 "compare": false, 00:10:19.177 "compare_and_write": false, 00:10:19.177 "abort": true, 00:10:19.177 "seek_hole": false, 00:10:19.177 "seek_data": false, 00:10:19.177 
"copy": true, 00:10:19.177 "nvme_iov_md": false 00:10:19.177 }, 00:10:19.177 "memory_domains": [ 00:10:19.177 { 00:10:19.177 "dma_device_id": "system", 00:10:19.177 "dma_device_type": 1 00:10:19.177 }, 00:10:19.177 { 00:10:19.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.177 "dma_device_type": 2 00:10:19.177 } 00:10:19.177 ], 00:10:19.177 "driver_specific": {} 00:10:19.177 } 00:10:19.177 ] 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.177 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.436 [2024-10-15 09:09:37.073972] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:19.436 [2024-10-15 09:09:37.074141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:19.436 [2024-10-15 09:09:37.074200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.436 [2024-10-15 09:09:37.076438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.436 [2024-10-15 09:09:37.076555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.436 09:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.436 "name": "Existed_Raid", 00:10:19.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.436 "strip_size_kb": 64, 00:10:19.436 "state": "configuring", 00:10:19.436 
"raid_level": "raid0", 00:10:19.436 "superblock": false, 00:10:19.436 "num_base_bdevs": 4, 00:10:19.436 "num_base_bdevs_discovered": 3, 00:10:19.436 "num_base_bdevs_operational": 4, 00:10:19.436 "base_bdevs_list": [ 00:10:19.436 { 00:10:19.436 "name": "BaseBdev1", 00:10:19.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.436 "is_configured": false, 00:10:19.436 "data_offset": 0, 00:10:19.436 "data_size": 0 00:10:19.436 }, 00:10:19.436 { 00:10:19.436 "name": "BaseBdev2", 00:10:19.436 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:19.436 "is_configured": true, 00:10:19.436 "data_offset": 0, 00:10:19.436 "data_size": 65536 00:10:19.436 }, 00:10:19.436 { 00:10:19.436 "name": "BaseBdev3", 00:10:19.436 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:19.436 "is_configured": true, 00:10:19.436 "data_offset": 0, 00:10:19.436 "data_size": 65536 00:10:19.436 }, 00:10:19.436 { 00:10:19.436 "name": "BaseBdev4", 00:10:19.436 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:19.436 "is_configured": true, 00:10:19.436 "data_offset": 0, 00:10:19.436 "data_size": 65536 00:10:19.436 } 00:10:19.436 ] 00:10:19.436 }' 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.436 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.694 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:19.694 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.694 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.694 [2024-10-15 09:09:37.573141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.694 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.694 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.694 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.694 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.694 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.695 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.953 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.953 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.953 "name": "Existed_Raid", 00:10:19.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.953 "strip_size_kb": 64, 00:10:19.953 "state": "configuring", 00:10:19.953 "raid_level": "raid0", 00:10:19.953 "superblock": false, 00:10:19.953 
"num_base_bdevs": 4, 00:10:19.953 "num_base_bdevs_discovered": 2, 00:10:19.953 "num_base_bdevs_operational": 4, 00:10:19.953 "base_bdevs_list": [ 00:10:19.953 { 00:10:19.953 "name": "BaseBdev1", 00:10:19.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.953 "is_configured": false, 00:10:19.953 "data_offset": 0, 00:10:19.953 "data_size": 0 00:10:19.953 }, 00:10:19.953 { 00:10:19.953 "name": null, 00:10:19.953 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:19.953 "is_configured": false, 00:10:19.953 "data_offset": 0, 00:10:19.953 "data_size": 65536 00:10:19.953 }, 00:10:19.953 { 00:10:19.953 "name": "BaseBdev3", 00:10:19.953 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:19.953 "is_configured": true, 00:10:19.953 "data_offset": 0, 00:10:19.953 "data_size": 65536 00:10:19.953 }, 00:10:19.953 { 00:10:19.953 "name": "BaseBdev4", 00:10:19.953 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:19.953 "is_configured": true, 00:10:19.953 "data_offset": 0, 00:10:19.953 "data_size": 65536 00:10:19.953 } 00:10:19.953 ] 00:10:19.953 }' 00:10:19.953 09:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.953 09:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.211 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.211 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.211 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.212 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.212 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.212 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:20.212 09:09:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:20.212 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.212 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.471 [2024-10-15 09:09:38.135912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.471 BaseBdev1 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.471 [ 00:10:20.471 { 00:10:20.471 "name": "BaseBdev1", 00:10:20.471 "aliases": [ 00:10:20.471 "c340ccc9-b6af-4f63-a045-dc91fca993e0" 00:10:20.471 ], 00:10:20.471 "product_name": "Malloc disk", 00:10:20.471 "block_size": 512, 00:10:20.471 "num_blocks": 65536, 00:10:20.471 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:20.471 "assigned_rate_limits": { 00:10:20.471 "rw_ios_per_sec": 0, 00:10:20.471 "rw_mbytes_per_sec": 0, 00:10:20.471 "r_mbytes_per_sec": 0, 00:10:20.471 "w_mbytes_per_sec": 0 00:10:20.471 }, 00:10:20.471 "claimed": true, 00:10:20.471 "claim_type": "exclusive_write", 00:10:20.471 "zoned": false, 00:10:20.471 "supported_io_types": { 00:10:20.471 "read": true, 00:10:20.471 "write": true, 00:10:20.471 "unmap": true, 00:10:20.471 "flush": true, 00:10:20.471 "reset": true, 00:10:20.471 "nvme_admin": false, 00:10:20.471 "nvme_io": false, 00:10:20.471 "nvme_io_md": false, 00:10:20.471 "write_zeroes": true, 00:10:20.471 "zcopy": true, 00:10:20.471 "get_zone_info": false, 00:10:20.471 "zone_management": false, 00:10:20.471 "zone_append": false, 00:10:20.471 "compare": false, 00:10:20.471 "compare_and_write": false, 00:10:20.471 "abort": true, 00:10:20.471 "seek_hole": false, 00:10:20.471 "seek_data": false, 00:10:20.471 "copy": true, 00:10:20.471 "nvme_iov_md": false 00:10:20.471 }, 00:10:20.471 "memory_domains": [ 00:10:20.471 { 00:10:20.471 "dma_device_id": "system", 00:10:20.471 "dma_device_type": 1 00:10:20.471 }, 00:10:20.471 { 00:10:20.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.471 "dma_device_type": 2 00:10:20.471 } 00:10:20.471 ], 00:10:20.471 "driver_specific": {} 00:10:20.471 } 00:10:20.471 ] 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.471 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.472 "name": "Existed_Raid", 00:10:20.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.472 "strip_size_kb": 64, 00:10:20.472 "state": "configuring", 00:10:20.472 "raid_level": "raid0", 00:10:20.472 "superblock": false, 
00:10:20.472 "num_base_bdevs": 4, 00:10:20.472 "num_base_bdevs_discovered": 3, 00:10:20.472 "num_base_bdevs_operational": 4, 00:10:20.472 "base_bdevs_list": [ 00:10:20.472 { 00:10:20.472 "name": "BaseBdev1", 00:10:20.472 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:20.472 "is_configured": true, 00:10:20.472 "data_offset": 0, 00:10:20.472 "data_size": 65536 00:10:20.472 }, 00:10:20.472 { 00:10:20.472 "name": null, 00:10:20.472 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:20.472 "is_configured": false, 00:10:20.472 "data_offset": 0, 00:10:20.472 "data_size": 65536 00:10:20.472 }, 00:10:20.472 { 00:10:20.472 "name": "BaseBdev3", 00:10:20.472 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:20.472 "is_configured": true, 00:10:20.472 "data_offset": 0, 00:10:20.472 "data_size": 65536 00:10:20.472 }, 00:10:20.472 { 00:10:20.472 "name": "BaseBdev4", 00:10:20.472 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:20.472 "is_configured": true, 00:10:20.472 "data_offset": 0, 00:10:20.472 "data_size": 65536 00:10:20.472 } 00:10:20.472 ] 00:10:20.472 }' 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.472 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:21.038 09:09:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.038 [2024-10-15 09:09:38.719069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.038 "name": "Existed_Raid", 00:10:21.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.038 "strip_size_kb": 64, 00:10:21.038 "state": "configuring", 00:10:21.038 "raid_level": "raid0", 00:10:21.038 "superblock": false, 00:10:21.038 "num_base_bdevs": 4, 00:10:21.038 "num_base_bdevs_discovered": 2, 00:10:21.038 "num_base_bdevs_operational": 4, 00:10:21.038 "base_bdevs_list": [ 00:10:21.038 { 00:10:21.038 "name": "BaseBdev1", 00:10:21.038 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:21.038 "is_configured": true, 00:10:21.038 "data_offset": 0, 00:10:21.038 "data_size": 65536 00:10:21.038 }, 00:10:21.038 { 00:10:21.038 "name": null, 00:10:21.038 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:21.038 "is_configured": false, 00:10:21.038 "data_offset": 0, 00:10:21.038 "data_size": 65536 00:10:21.038 }, 00:10:21.038 { 00:10:21.038 "name": null, 00:10:21.038 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:21.038 "is_configured": false, 00:10:21.038 "data_offset": 0, 00:10:21.038 "data_size": 65536 00:10:21.038 }, 00:10:21.038 { 00:10:21.038 "name": "BaseBdev4", 00:10:21.038 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:21.038 "is_configured": true, 00:10:21.038 "data_offset": 0, 00:10:21.038 "data_size": 65536 00:10:21.038 } 00:10:21.038 ] 00:10:21.038 }' 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.038 09:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.605 [2024-10-15 09:09:39.278164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.605 "name": "Existed_Raid", 00:10:21.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.605 "strip_size_kb": 64, 00:10:21.605 "state": "configuring", 00:10:21.605 "raid_level": "raid0", 00:10:21.605 "superblock": false, 00:10:21.605 "num_base_bdevs": 4, 00:10:21.605 "num_base_bdevs_discovered": 3, 00:10:21.605 "num_base_bdevs_operational": 4, 00:10:21.605 "base_bdevs_list": [ 00:10:21.605 { 00:10:21.605 "name": "BaseBdev1", 00:10:21.605 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:21.605 "is_configured": true, 00:10:21.605 "data_offset": 0, 00:10:21.605 "data_size": 65536 00:10:21.605 }, 00:10:21.605 { 00:10:21.605 "name": null, 00:10:21.605 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:21.605 "is_configured": false, 00:10:21.605 "data_offset": 0, 00:10:21.605 "data_size": 65536 00:10:21.605 }, 00:10:21.605 { 00:10:21.605 "name": "BaseBdev3", 00:10:21.605 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:21.605 "is_configured": 
true, 00:10:21.605 "data_offset": 0, 00:10:21.605 "data_size": 65536 00:10:21.605 }, 00:10:21.605 { 00:10:21.605 "name": "BaseBdev4", 00:10:21.605 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:21.605 "is_configured": true, 00:10:21.605 "data_offset": 0, 00:10:21.605 "data_size": 65536 00:10:21.605 } 00:10:21.605 ] 00:10:21.605 }' 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.605 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.171 [2024-10-15 09:09:39.813307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.171 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.171 "name": "Existed_Raid", 00:10:22.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.171 "strip_size_kb": 64, 00:10:22.171 "state": "configuring", 00:10:22.171 "raid_level": "raid0", 00:10:22.171 "superblock": false, 00:10:22.171 "num_base_bdevs": 4, 00:10:22.171 "num_base_bdevs_discovered": 2, 00:10:22.171 "num_base_bdevs_operational": 4, 00:10:22.171 
"base_bdevs_list": [ 00:10:22.171 { 00:10:22.171 "name": null, 00:10:22.171 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:22.171 "is_configured": false, 00:10:22.171 "data_offset": 0, 00:10:22.171 "data_size": 65536 00:10:22.171 }, 00:10:22.171 { 00:10:22.171 "name": null, 00:10:22.171 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:22.171 "is_configured": false, 00:10:22.171 "data_offset": 0, 00:10:22.171 "data_size": 65536 00:10:22.171 }, 00:10:22.171 { 00:10:22.171 "name": "BaseBdev3", 00:10:22.171 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:22.171 "is_configured": true, 00:10:22.171 "data_offset": 0, 00:10:22.171 "data_size": 65536 00:10:22.171 }, 00:10:22.171 { 00:10:22.171 "name": "BaseBdev4", 00:10:22.171 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:22.171 "is_configured": true, 00:10:22.171 "data_offset": 0, 00:10:22.172 "data_size": 65536 00:10:22.172 } 00:10:22.172 ] 00:10:22.172 }' 00:10:22.172 09:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.172 09:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:22.739 09:09:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.739 [2024-10-15 09:09:40.411585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.739 "name": "Existed_Raid", 00:10:22.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.739 "strip_size_kb": 64, 00:10:22.739 "state": "configuring", 00:10:22.739 "raid_level": "raid0", 00:10:22.739 "superblock": false, 00:10:22.739 "num_base_bdevs": 4, 00:10:22.739 "num_base_bdevs_discovered": 3, 00:10:22.739 "num_base_bdevs_operational": 4, 00:10:22.739 "base_bdevs_list": [ 00:10:22.739 { 00:10:22.739 "name": null, 00:10:22.739 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:22.739 "is_configured": false, 00:10:22.739 "data_offset": 0, 00:10:22.739 "data_size": 65536 00:10:22.739 }, 00:10:22.739 { 00:10:22.739 "name": "BaseBdev2", 00:10:22.739 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:22.739 "is_configured": true, 00:10:22.739 "data_offset": 0, 00:10:22.739 "data_size": 65536 00:10:22.739 }, 00:10:22.739 { 00:10:22.739 "name": "BaseBdev3", 00:10:22.739 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:22.739 "is_configured": true, 00:10:22.739 "data_offset": 0, 00:10:22.739 "data_size": 65536 00:10:22.739 }, 00:10:22.739 { 00:10:22.739 "name": "BaseBdev4", 00:10:22.739 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:22.739 "is_configured": true, 00:10:22.739 "data_offset": 0, 00:10:22.739 "data_size": 65536 00:10:22.739 } 00:10:22.739 ] 00:10:22.739 }' 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.739 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.998 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.998 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:22.998 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.998 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:22.998 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c340ccc9-b6af-4f63-a045-dc91fca993e0 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.256 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.256 [2024-10-15 09:09:41.017217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:23.257 [2024-10-15 09:09:41.017383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:23.257 [2024-10-15 09:09:41.017395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:23.257 [2024-10-15 09:09:41.017673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:23.257 [2024-10-15 09:09:41.017864] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:23.257 [2024-10-15 09:09:41.017878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:23.257 [2024-10-15 09:09:41.018133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.257 NewBaseBdev 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.257 [ 00:10:23.257 { 
00:10:23.257 "name": "NewBaseBdev", 00:10:23.257 "aliases": [ 00:10:23.257 "c340ccc9-b6af-4f63-a045-dc91fca993e0" 00:10:23.257 ], 00:10:23.257 "product_name": "Malloc disk", 00:10:23.257 "block_size": 512, 00:10:23.257 "num_blocks": 65536, 00:10:23.257 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:23.257 "assigned_rate_limits": { 00:10:23.257 "rw_ios_per_sec": 0, 00:10:23.257 "rw_mbytes_per_sec": 0, 00:10:23.257 "r_mbytes_per_sec": 0, 00:10:23.257 "w_mbytes_per_sec": 0 00:10:23.257 }, 00:10:23.257 "claimed": true, 00:10:23.257 "claim_type": "exclusive_write", 00:10:23.257 "zoned": false, 00:10:23.257 "supported_io_types": { 00:10:23.257 "read": true, 00:10:23.257 "write": true, 00:10:23.257 "unmap": true, 00:10:23.257 "flush": true, 00:10:23.257 "reset": true, 00:10:23.257 "nvme_admin": false, 00:10:23.257 "nvme_io": false, 00:10:23.257 "nvme_io_md": false, 00:10:23.257 "write_zeroes": true, 00:10:23.257 "zcopy": true, 00:10:23.257 "get_zone_info": false, 00:10:23.257 "zone_management": false, 00:10:23.257 "zone_append": false, 00:10:23.257 "compare": false, 00:10:23.257 "compare_and_write": false, 00:10:23.257 "abort": true, 00:10:23.257 "seek_hole": false, 00:10:23.257 "seek_data": false, 00:10:23.257 "copy": true, 00:10:23.257 "nvme_iov_md": false 00:10:23.257 }, 00:10:23.257 "memory_domains": [ 00:10:23.257 { 00:10:23.257 "dma_device_id": "system", 00:10:23.257 "dma_device_type": 1 00:10:23.257 }, 00:10:23.257 { 00:10:23.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.257 "dma_device_type": 2 00:10:23.257 } 00:10:23.257 ], 00:10:23.257 "driver_specific": {} 00:10:23.257 } 00:10:23.257 ] 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:23.257 
09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.257 "name": "Existed_Raid", 00:10:23.257 "uuid": "15130ec8-5524-4a64-89d2-b7f184aa2519", 00:10:23.257 "strip_size_kb": 64, 00:10:23.257 "state": "online", 00:10:23.257 "raid_level": "raid0", 00:10:23.257 "superblock": false, 00:10:23.257 "num_base_bdevs": 4, 00:10:23.257 "num_base_bdevs_discovered": 4, 00:10:23.257 
"num_base_bdevs_operational": 4, 00:10:23.257 "base_bdevs_list": [ 00:10:23.257 { 00:10:23.257 "name": "NewBaseBdev", 00:10:23.257 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:23.257 "is_configured": true, 00:10:23.257 "data_offset": 0, 00:10:23.257 "data_size": 65536 00:10:23.257 }, 00:10:23.257 { 00:10:23.257 "name": "BaseBdev2", 00:10:23.257 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:23.257 "is_configured": true, 00:10:23.257 "data_offset": 0, 00:10:23.257 "data_size": 65536 00:10:23.257 }, 00:10:23.257 { 00:10:23.257 "name": "BaseBdev3", 00:10:23.257 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:23.257 "is_configured": true, 00:10:23.257 "data_offset": 0, 00:10:23.257 "data_size": 65536 00:10:23.257 }, 00:10:23.257 { 00:10:23.257 "name": "BaseBdev4", 00:10:23.257 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:23.257 "is_configured": true, 00:10:23.257 "data_offset": 0, 00:10:23.257 "data_size": 65536 00:10:23.257 } 00:10:23.257 ] 00:10:23.257 }' 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.257 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.823 [2024-10-15 09:09:41.476904] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.823 "name": "Existed_Raid", 00:10:23.823 "aliases": [ 00:10:23.823 "15130ec8-5524-4a64-89d2-b7f184aa2519" 00:10:23.823 ], 00:10:23.823 "product_name": "Raid Volume", 00:10:23.823 "block_size": 512, 00:10:23.823 "num_blocks": 262144, 00:10:23.823 "uuid": "15130ec8-5524-4a64-89d2-b7f184aa2519", 00:10:23.823 "assigned_rate_limits": { 00:10:23.823 "rw_ios_per_sec": 0, 00:10:23.823 "rw_mbytes_per_sec": 0, 00:10:23.823 "r_mbytes_per_sec": 0, 00:10:23.823 "w_mbytes_per_sec": 0 00:10:23.823 }, 00:10:23.823 "claimed": false, 00:10:23.823 "zoned": false, 00:10:23.823 "supported_io_types": { 00:10:23.823 "read": true, 00:10:23.823 "write": true, 00:10:23.823 "unmap": true, 00:10:23.823 "flush": true, 00:10:23.823 "reset": true, 00:10:23.823 "nvme_admin": false, 00:10:23.823 "nvme_io": false, 00:10:23.823 "nvme_io_md": false, 00:10:23.823 "write_zeroes": true, 00:10:23.823 "zcopy": false, 00:10:23.823 "get_zone_info": false, 00:10:23.823 "zone_management": false, 00:10:23.823 "zone_append": false, 00:10:23.823 "compare": false, 00:10:23.823 "compare_and_write": false, 00:10:23.823 "abort": false, 00:10:23.823 "seek_hole": false, 00:10:23.823 "seek_data": false, 00:10:23.823 "copy": false, 00:10:23.823 "nvme_iov_md": false 00:10:23.823 }, 00:10:23.823 "memory_domains": [ 00:10:23.823 { 00:10:23.823 "dma_device_id": "system", 
00:10:23.823 "dma_device_type": 1 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.823 "dma_device_type": 2 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "dma_device_id": "system", 00:10:23.823 "dma_device_type": 1 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.823 "dma_device_type": 2 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "dma_device_id": "system", 00:10:23.823 "dma_device_type": 1 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.823 "dma_device_type": 2 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "dma_device_id": "system", 00:10:23.823 "dma_device_type": 1 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.823 "dma_device_type": 2 00:10:23.823 } 00:10:23.823 ], 00:10:23.823 "driver_specific": { 00:10:23.823 "raid": { 00:10:23.823 "uuid": "15130ec8-5524-4a64-89d2-b7f184aa2519", 00:10:23.823 "strip_size_kb": 64, 00:10:23.823 "state": "online", 00:10:23.823 "raid_level": "raid0", 00:10:23.823 "superblock": false, 00:10:23.823 "num_base_bdevs": 4, 00:10:23.823 "num_base_bdevs_discovered": 4, 00:10:23.823 "num_base_bdevs_operational": 4, 00:10:23.823 "base_bdevs_list": [ 00:10:23.823 { 00:10:23.823 "name": "NewBaseBdev", 00:10:23.823 "uuid": "c340ccc9-b6af-4f63-a045-dc91fca993e0", 00:10:23.823 "is_configured": true, 00:10:23.823 "data_offset": 0, 00:10:23.823 "data_size": 65536 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "name": "BaseBdev2", 00:10:23.823 "uuid": "b3460c2f-521f-45e0-84b4-d96b738b3e5e", 00:10:23.823 "is_configured": true, 00:10:23.823 "data_offset": 0, 00:10:23.823 "data_size": 65536 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "name": "BaseBdev3", 00:10:23.823 "uuid": "001710f4-ac91-4fef-b6c8-fd206f1b4914", 00:10:23.823 "is_configured": true, 00:10:23.823 "data_offset": 0, 00:10:23.823 "data_size": 65536 00:10:23.823 }, 00:10:23.823 { 00:10:23.823 "name": "BaseBdev4", 
00:10:23.823 "uuid": "d12a64ad-cc29-426f-a38f-893e6cd8a5c2", 00:10:23.823 "is_configured": true, 00:10:23.823 "data_offset": 0, 00:10:23.823 "data_size": 65536 00:10:23.823 } 00:10:23.823 ] 00:10:23.823 } 00:10:23.823 } 00:10:23.823 }' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:23.823 BaseBdev2 00:10:23.823 BaseBdev3 00:10:23.823 BaseBdev4' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.823 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:24.083 09:09:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.083 [2024-10-15 09:09:41.815964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.083 [2024-10-15 09:09:41.816016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.083 [2024-10-15 09:09:41.816114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.083 [2024-10-15 09:09:41.816185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.083 [2024-10-15 09:09:41.816196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69469 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 69469 ']' 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69469 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69469 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.083 killing process with pid 69469 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69469' 00:10:24.083 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69469 00:10:24.084 [2024-10-15 09:09:41.854994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.084 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69469 00:10:24.651 [2024-10-15 09:09:42.277931] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.587 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:25.587 00:10:25.587 real 0m12.071s 00:10:25.587 user 0m18.952s 00:10:25.587 sys 0m2.262s 00:10:25.587 ************************************ 00:10:25.587 END TEST raid_state_function_test 00:10:25.587 ************************************ 00:10:25.587 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.587 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.847 09:09:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
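The state checks repeated throughout this log (`verify_raid_bdev_state`) all follow the same pattern: fetch the raid bdev list over RPC, pick out the named entry with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compare individual fields against the expected values. A minimal standalone sketch of that filter logic, fed canned JSON instead of a live `rpc_cmd bdev_raid_get_bdevs all` call (the JSON shape is copied from the output above; the real helper lives in the test scripts, not here):

```shell
#!/usr/bin/env bash
# Canned stand-in for `rpc_cmd bdev_raid_get_bdevs all` output,
# shaped like the entries dumped earlier in this log.
raid_bdevs='[{"name": "Existed_Raid", "state": "online",
              "raid_level": "raid0", "strip_size_kb": 64,
              "num_base_bdevs_discovered": 4,
              "num_base_bdevs_operational": 4}]'

# Select the entry for the named raid bdev, exactly as the log's
# `jq -r '.[] | select(.name == "Existed_Raid")'` filter does.
tmp=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

# Pull out the fields that verify_raid_bdev_state compares.
state=$(echo "$tmp" | jq -r '.state')
raid_level=$(echo "$tmp" | jq -r '.raid_level')
discovered=$(echo "$tmp" | jq -r '.num_base_bdevs_discovered')

[ "$state" = "online" ] || { echo "unexpected state: $state"; exit 1; }
[ "$raid_level" = "raid0" ] || { echo "unexpected level"; exit 1; }
echo "state check passed: $state/$raid_level ($discovered bdevs)"
```

The field-by-field comparison explains the log's `[[ true == \t\r\u\e ]]`-style checks: each expected value is matched literally against the jq-extracted string.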
00:10:25.847 09:09:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:25.847 09:09:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.847 09:09:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.847 ************************************ 00:10:25.847 START TEST raid_state_function_test_sb 00:10:25.847 ************************************ 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.847 09:09:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70146 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.847 Process raid pid: 70146 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70146' 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70146 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70146 ']' 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.847 09:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.847 [2024-10-15 09:09:43.665522] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
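The `waitforlisten 70146` call above blocks until the freshly started `bdev_svc` app is reachable on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`). A simplified sketch of that polling pattern, under the assumption that checking "process alive + socket exists" is enough (the real `autotest_common.sh` helper additionally retries RPC calls; `waitforlisten_sketch` and the Python dummy server here are illustrative, not SPDK code):

```shell
#!/usr/bin/env bash
# Poll until $pid has created the listening socket $rpc_addr, or give up.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    while [ "$max_retries" -gt 0 ]; do
        # Bail out early if the process died before creating the socket.
        kill -0 "$pid" 2>/dev/null || return 1
        [ -S "$rpc_addr" ] && return 0
        max_retries=$((max_retries - 1))
        sleep 0.1
    done
    return 1
}

# Demonstrate against a dummy "server" that binds a UNIX socket.
sock=$(mktemp -u)
python3 -c 'import socket, sys, time
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])
time.sleep(5)' "$sock" &
srv=$!

waitforlisten_sketch "$srv" "$sock"
listen_rc=$?
[ "$listen_rc" -eq 0 ] && echo "listening on $sock"

kill "$srv" 2>/dev/null
wait "$srv" 2>/dev/null
rm -f "$sock"
```

Polling with a bounded retry count (rather than waiting forever) is what lets the harness fail fast with a useful error when the app crashes during startup.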
00:10:25.847 [2024-10-15 09:09:43.665769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.106 [2024-10-15 09:09:43.819257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.106 [2024-10-15 09:09:43.952046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.366 [2024-10-15 09:09:44.181326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.366 [2024-10-15 09:09:44.181409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.627 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.627 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:26.627 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.627 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.627 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.627 [2024-10-15 09:09:44.518427] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.627 [2024-10-15 09:09:44.518508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.627 [2024-10-15 09:09:44.518520] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.627 [2024-10-15 09:09:44.518530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.627 [2024-10-15 09:09:44.518537] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:26.627 [2024-10-15 09:09:44.518546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.627 [2024-10-15 09:09:44.518552] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.627 [2024-10-15 09:09:44.518561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.887 09:09:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.887 "name": "Existed_Raid", 00:10:26.887 "uuid": "6760fd5c-e4b1-49ed-95e5-5be6404fc6cc", 00:10:26.887 "strip_size_kb": 64, 00:10:26.887 "state": "configuring", 00:10:26.887 "raid_level": "raid0", 00:10:26.887 "superblock": true, 00:10:26.887 "num_base_bdevs": 4, 00:10:26.887 "num_base_bdevs_discovered": 0, 00:10:26.887 "num_base_bdevs_operational": 4, 00:10:26.887 "base_bdevs_list": [ 00:10:26.887 { 00:10:26.887 "name": "BaseBdev1", 00:10:26.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.887 "is_configured": false, 00:10:26.887 "data_offset": 0, 00:10:26.887 "data_size": 0 00:10:26.887 }, 00:10:26.887 { 00:10:26.887 "name": "BaseBdev2", 00:10:26.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.887 "is_configured": false, 00:10:26.887 "data_offset": 0, 00:10:26.887 "data_size": 0 00:10:26.887 }, 00:10:26.887 { 00:10:26.887 "name": "BaseBdev3", 00:10:26.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.887 "is_configured": false, 00:10:26.887 "data_offset": 0, 00:10:26.887 "data_size": 0 00:10:26.887 }, 00:10:26.887 { 00:10:26.887 "name": "BaseBdev4", 00:10:26.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.887 "is_configured": false, 00:10:26.887 "data_offset": 0, 00:10:26.887 "data_size": 0 00:10:26.887 } 00:10:26.887 ] 00:10:26.887 }' 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.887 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.147 [2024-10-15 09:09:44.981533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.147 [2024-10-15 09:09:44.981714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.147 [2024-10-15 09:09:44.993490] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.147 [2024-10-15 09:09:44.993583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.147 [2024-10-15 09:09:44.993613] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.147 [2024-10-15 09:09:44.993636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.147 [2024-10-15 09:09:44.993662] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.147 [2024-10-15 09:09:44.993736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.147 [2024-10-15 09:09:44.993759] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:27.147 [2024-10-15 09:09:44.993780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.147 09:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.407 [2024-10-15 09:09:45.043800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.407 BaseBdev1 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.407 [ 00:10:27.407 { 00:10:27.407 "name": "BaseBdev1", 00:10:27.407 "aliases": [ 00:10:27.407 "5661b689-b523-49e5-9f42-b3c8854df020" 00:10:27.407 ], 00:10:27.407 "product_name": "Malloc disk", 00:10:27.407 "block_size": 512, 00:10:27.407 "num_blocks": 65536, 00:10:27.407 "uuid": "5661b689-b523-49e5-9f42-b3c8854df020", 00:10:27.407 "assigned_rate_limits": { 00:10:27.407 "rw_ios_per_sec": 0, 00:10:27.407 "rw_mbytes_per_sec": 0, 00:10:27.407 "r_mbytes_per_sec": 0, 00:10:27.407 "w_mbytes_per_sec": 0 00:10:27.407 }, 00:10:27.407 "claimed": true, 00:10:27.407 "claim_type": "exclusive_write", 00:10:27.407 "zoned": false, 00:10:27.407 "supported_io_types": { 00:10:27.407 "read": true, 00:10:27.407 "write": true, 00:10:27.407 "unmap": true, 00:10:27.407 "flush": true, 00:10:27.407 "reset": true, 00:10:27.407 "nvme_admin": false, 00:10:27.407 "nvme_io": false, 00:10:27.407 "nvme_io_md": false, 00:10:27.407 "write_zeroes": true, 00:10:27.407 "zcopy": true, 00:10:27.407 "get_zone_info": false, 00:10:27.407 "zone_management": false, 00:10:27.407 "zone_append": false, 00:10:27.407 "compare": false, 00:10:27.407 "compare_and_write": false, 00:10:27.407 "abort": true, 00:10:27.407 "seek_hole": false, 00:10:27.407 "seek_data": false, 00:10:27.407 "copy": true, 00:10:27.407 "nvme_iov_md": false 00:10:27.407 }, 00:10:27.407 "memory_domains": [ 00:10:27.407 { 00:10:27.407 "dma_device_id": "system", 00:10:27.407 "dma_device_type": 1 00:10:27.407 }, 00:10:27.407 { 00:10:27.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.407 "dma_device_type": 2 00:10:27.407 } 00:10:27.407 ], 00:10:27.407 "driver_specific": {} 
00:10:27.407 } 00:10:27.407 ] 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.407 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.408 "name": "Existed_Raid", 00:10:27.408 "uuid": "32cab4e0-733d-4c00-a81c-538640e82f27", 00:10:27.408 "strip_size_kb": 64, 00:10:27.408 "state": "configuring", 00:10:27.408 "raid_level": "raid0", 00:10:27.408 "superblock": true, 00:10:27.408 "num_base_bdevs": 4, 00:10:27.408 "num_base_bdevs_discovered": 1, 00:10:27.408 "num_base_bdevs_operational": 4, 00:10:27.408 "base_bdevs_list": [ 00:10:27.408 { 00:10:27.408 "name": "BaseBdev1", 00:10:27.408 "uuid": "5661b689-b523-49e5-9f42-b3c8854df020", 00:10:27.408 "is_configured": true, 00:10:27.408 "data_offset": 2048, 00:10:27.408 "data_size": 63488 00:10:27.408 }, 00:10:27.408 { 00:10:27.408 "name": "BaseBdev2", 00:10:27.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.408 "is_configured": false, 00:10:27.408 "data_offset": 0, 00:10:27.408 "data_size": 0 00:10:27.408 }, 00:10:27.408 { 00:10:27.408 "name": "BaseBdev3", 00:10:27.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.408 "is_configured": false, 00:10:27.408 "data_offset": 0, 00:10:27.408 "data_size": 0 00:10:27.408 }, 00:10:27.408 { 00:10:27.408 "name": "BaseBdev4", 00:10:27.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.408 "is_configured": false, 00:10:27.408 "data_offset": 0, 00:10:27.408 "data_size": 0 00:10:27.408 } 00:10:27.408 ] 00:10:27.408 }' 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.408 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.668 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.668 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.668 09:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.668 [2024-10-15 09:09:45.507122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.668 [2024-10-15 09:09:45.507277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:27.668 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.668 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.668 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.668 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.668 [2024-10-15 09:09:45.519236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.668 [2024-10-15 09:09:45.521254] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.668 [2024-10-15 09:09:45.521352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.668 [2024-10-15 09:09:45.521382] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.668 [2024-10-15 09:09:45.521407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.668 [2024-10-15 09:09:45.521426] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:27.668 [2024-10-15 09:09:45.521447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:27.669 09:09:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.669 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.929 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.929 "name": 
"Existed_Raid", 00:10:27.929 "uuid": "b13c0ed7-e18b-4ced-9b5c-c10a5f42804b", 00:10:27.929 "strip_size_kb": 64, 00:10:27.929 "state": "configuring", 00:10:27.929 "raid_level": "raid0", 00:10:27.929 "superblock": true, 00:10:27.929 "num_base_bdevs": 4, 00:10:27.929 "num_base_bdevs_discovered": 1, 00:10:27.929 "num_base_bdevs_operational": 4, 00:10:27.929 "base_bdevs_list": [ 00:10:27.929 { 00:10:27.929 "name": "BaseBdev1", 00:10:27.929 "uuid": "5661b689-b523-49e5-9f42-b3c8854df020", 00:10:27.929 "is_configured": true, 00:10:27.929 "data_offset": 2048, 00:10:27.929 "data_size": 63488 00:10:27.929 }, 00:10:27.929 { 00:10:27.929 "name": "BaseBdev2", 00:10:27.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.929 "is_configured": false, 00:10:27.929 "data_offset": 0, 00:10:27.929 "data_size": 0 00:10:27.929 }, 00:10:27.929 { 00:10:27.929 "name": "BaseBdev3", 00:10:27.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.929 "is_configured": false, 00:10:27.929 "data_offset": 0, 00:10:27.929 "data_size": 0 00:10:27.929 }, 00:10:27.929 { 00:10:27.929 "name": "BaseBdev4", 00:10:27.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.929 "is_configured": false, 00:10:27.929 "data_offset": 0, 00:10:27.929 "data_size": 0 00:10:27.929 } 00:10:27.929 ] 00:10:27.929 }' 00:10:27.929 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.929 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.189 09:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.189 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.189 09:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.189 [2024-10-15 09:09:46.006746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:28.189 BaseBdev2 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.189 [ 00:10:28.189 { 00:10:28.189 "name": "BaseBdev2", 00:10:28.189 "aliases": [ 00:10:28.189 "3e5df4c2-94e7-4b5b-89bf-2182cb9b276f" 00:10:28.189 ], 00:10:28.189 "product_name": "Malloc disk", 00:10:28.189 "block_size": 512, 00:10:28.189 "num_blocks": 65536, 00:10:28.189 "uuid": "3e5df4c2-94e7-4b5b-89bf-2182cb9b276f", 00:10:28.189 
"assigned_rate_limits": { 00:10:28.189 "rw_ios_per_sec": 0, 00:10:28.189 "rw_mbytes_per_sec": 0, 00:10:28.189 "r_mbytes_per_sec": 0, 00:10:28.189 "w_mbytes_per_sec": 0 00:10:28.189 }, 00:10:28.189 "claimed": true, 00:10:28.189 "claim_type": "exclusive_write", 00:10:28.189 "zoned": false, 00:10:28.189 "supported_io_types": { 00:10:28.189 "read": true, 00:10:28.189 "write": true, 00:10:28.189 "unmap": true, 00:10:28.189 "flush": true, 00:10:28.189 "reset": true, 00:10:28.189 "nvme_admin": false, 00:10:28.189 "nvme_io": false, 00:10:28.189 "nvme_io_md": false, 00:10:28.189 "write_zeroes": true, 00:10:28.189 "zcopy": true, 00:10:28.189 "get_zone_info": false, 00:10:28.189 "zone_management": false, 00:10:28.189 "zone_append": false, 00:10:28.189 "compare": false, 00:10:28.189 "compare_and_write": false, 00:10:28.189 "abort": true, 00:10:28.189 "seek_hole": false, 00:10:28.189 "seek_data": false, 00:10:28.189 "copy": true, 00:10:28.189 "nvme_iov_md": false 00:10:28.189 }, 00:10:28.189 "memory_domains": [ 00:10:28.189 { 00:10:28.189 "dma_device_id": "system", 00:10:28.189 "dma_device_type": 1 00:10:28.189 }, 00:10:28.189 { 00:10:28.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.189 "dma_device_type": 2 00:10:28.189 } 00:10:28.189 ], 00:10:28.189 "driver_specific": {} 00:10:28.189 } 00:10:28.189 ] 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.189 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.451 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.451 "name": "Existed_Raid", 00:10:28.451 "uuid": "b13c0ed7-e18b-4ced-9b5c-c10a5f42804b", 00:10:28.451 "strip_size_kb": 64, 00:10:28.451 "state": "configuring", 00:10:28.451 "raid_level": "raid0", 00:10:28.451 "superblock": true, 00:10:28.451 "num_base_bdevs": 4, 00:10:28.451 "num_base_bdevs_discovered": 2, 00:10:28.451 "num_base_bdevs_operational": 4, 
00:10:28.451 "base_bdevs_list": [ 00:10:28.451 { 00:10:28.451 "name": "BaseBdev1", 00:10:28.451 "uuid": "5661b689-b523-49e5-9f42-b3c8854df020", 00:10:28.451 "is_configured": true, 00:10:28.451 "data_offset": 2048, 00:10:28.451 "data_size": 63488 00:10:28.451 }, 00:10:28.451 { 00:10:28.451 "name": "BaseBdev2", 00:10:28.451 "uuid": "3e5df4c2-94e7-4b5b-89bf-2182cb9b276f", 00:10:28.451 "is_configured": true, 00:10:28.451 "data_offset": 2048, 00:10:28.451 "data_size": 63488 00:10:28.451 }, 00:10:28.451 { 00:10:28.451 "name": "BaseBdev3", 00:10:28.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.451 "is_configured": false, 00:10:28.451 "data_offset": 0, 00:10:28.451 "data_size": 0 00:10:28.451 }, 00:10:28.451 { 00:10:28.451 "name": "BaseBdev4", 00:10:28.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.451 "is_configured": false, 00:10:28.451 "data_offset": 0, 00:10:28.451 "data_size": 0 00:10:28.451 } 00:10:28.451 ] 00:10:28.451 }' 00:10:28.451 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.451 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.712 [2024-10-15 09:09:46.586939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.712 BaseBdev3 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.712 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.972 [ 00:10:28.972 { 00:10:28.972 "name": "BaseBdev3", 00:10:28.972 "aliases": [ 00:10:28.972 "28307eb5-517c-4f47-8542-1ec719b7cf48" 00:10:28.972 ], 00:10:28.972 "product_name": "Malloc disk", 00:10:28.972 "block_size": 512, 00:10:28.972 "num_blocks": 65536, 00:10:28.972 "uuid": "28307eb5-517c-4f47-8542-1ec719b7cf48", 00:10:28.972 "assigned_rate_limits": { 00:10:28.972 "rw_ios_per_sec": 0, 00:10:28.972 "rw_mbytes_per_sec": 0, 00:10:28.972 "r_mbytes_per_sec": 0, 00:10:28.972 "w_mbytes_per_sec": 0 00:10:28.972 }, 00:10:28.972 "claimed": true, 00:10:28.972 "claim_type": "exclusive_write", 00:10:28.972 "zoned": false, 00:10:28.972 "supported_io_types": { 00:10:28.972 "read": true, 00:10:28.972 
"write": true, 00:10:28.972 "unmap": true, 00:10:28.972 "flush": true, 00:10:28.972 "reset": true, 00:10:28.972 "nvme_admin": false, 00:10:28.972 "nvme_io": false, 00:10:28.972 "nvme_io_md": false, 00:10:28.973 "write_zeroes": true, 00:10:28.973 "zcopy": true, 00:10:28.973 "get_zone_info": false, 00:10:28.973 "zone_management": false, 00:10:28.973 "zone_append": false, 00:10:28.973 "compare": false, 00:10:28.973 "compare_and_write": false, 00:10:28.973 "abort": true, 00:10:28.973 "seek_hole": false, 00:10:28.973 "seek_data": false, 00:10:28.973 "copy": true, 00:10:28.973 "nvme_iov_md": false 00:10:28.973 }, 00:10:28.973 "memory_domains": [ 00:10:28.973 { 00:10:28.973 "dma_device_id": "system", 00:10:28.973 "dma_device_type": 1 00:10:28.973 }, 00:10:28.973 { 00:10:28.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.973 "dma_device_type": 2 00:10:28.973 } 00:10:28.973 ], 00:10:28.973 "driver_specific": {} 00:10:28.973 } 00:10:28.973 ] 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.973 "name": "Existed_Raid", 00:10:28.973 "uuid": "b13c0ed7-e18b-4ced-9b5c-c10a5f42804b", 00:10:28.973 "strip_size_kb": 64, 00:10:28.973 "state": "configuring", 00:10:28.973 "raid_level": "raid0", 00:10:28.973 "superblock": true, 00:10:28.973 "num_base_bdevs": 4, 00:10:28.973 "num_base_bdevs_discovered": 3, 00:10:28.973 "num_base_bdevs_operational": 4, 00:10:28.973 "base_bdevs_list": [ 00:10:28.973 { 00:10:28.973 "name": "BaseBdev1", 00:10:28.973 "uuid": "5661b689-b523-49e5-9f42-b3c8854df020", 00:10:28.973 "is_configured": true, 00:10:28.973 "data_offset": 2048, 00:10:28.973 "data_size": 63488 00:10:28.973 }, 00:10:28.973 { 00:10:28.973 "name": "BaseBdev2", 00:10:28.973 "uuid": 
"3e5df4c2-94e7-4b5b-89bf-2182cb9b276f", 00:10:28.973 "is_configured": true, 00:10:28.973 "data_offset": 2048, 00:10:28.973 "data_size": 63488 00:10:28.973 }, 00:10:28.973 { 00:10:28.973 "name": "BaseBdev3", 00:10:28.973 "uuid": "28307eb5-517c-4f47-8542-1ec719b7cf48", 00:10:28.973 "is_configured": true, 00:10:28.973 "data_offset": 2048, 00:10:28.973 "data_size": 63488 00:10:28.973 }, 00:10:28.973 { 00:10:28.973 "name": "BaseBdev4", 00:10:28.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.973 "is_configured": false, 00:10:28.973 "data_offset": 0, 00:10:28.973 "data_size": 0 00:10:28.973 } 00:10:28.973 ] 00:10:28.973 }' 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.973 09:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.233 [2024-10-15 09:09:47.051184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.233 [2024-10-15 09:09:47.051554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:29.233 [2024-10-15 09:09:47.051609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:29.233 [2024-10-15 09:09:47.051915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:29.233 [2024-10-15 09:09:47.052120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:29.233 [2024-10-15 09:09:47.052173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:29.233 BaseBdev4 00:10:29.233 [2024-10-15 09:09:47.052364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.233 [ 00:10:29.233 { 00:10:29.233 "name": "BaseBdev4", 00:10:29.233 "aliases": [ 00:10:29.233 "9883f204-8bfc-4145-9903-9e4b949fa478" 00:10:29.233 ], 00:10:29.233 "product_name": "Malloc disk", 00:10:29.233 "block_size": 512, 00:10:29.233 
"num_blocks": 65536, 00:10:29.233 "uuid": "9883f204-8bfc-4145-9903-9e4b949fa478", 00:10:29.233 "assigned_rate_limits": { 00:10:29.233 "rw_ios_per_sec": 0, 00:10:29.233 "rw_mbytes_per_sec": 0, 00:10:29.233 "r_mbytes_per_sec": 0, 00:10:29.233 "w_mbytes_per_sec": 0 00:10:29.233 }, 00:10:29.233 "claimed": true, 00:10:29.233 "claim_type": "exclusive_write", 00:10:29.233 "zoned": false, 00:10:29.233 "supported_io_types": { 00:10:29.233 "read": true, 00:10:29.233 "write": true, 00:10:29.233 "unmap": true, 00:10:29.233 "flush": true, 00:10:29.233 "reset": true, 00:10:29.233 "nvme_admin": false, 00:10:29.233 "nvme_io": false, 00:10:29.233 "nvme_io_md": false, 00:10:29.233 "write_zeroes": true, 00:10:29.233 "zcopy": true, 00:10:29.233 "get_zone_info": false, 00:10:29.233 "zone_management": false, 00:10:29.233 "zone_append": false, 00:10:29.233 "compare": false, 00:10:29.233 "compare_and_write": false, 00:10:29.233 "abort": true, 00:10:29.233 "seek_hole": false, 00:10:29.233 "seek_data": false, 00:10:29.233 "copy": true, 00:10:29.233 "nvme_iov_md": false 00:10:29.233 }, 00:10:29.233 "memory_domains": [ 00:10:29.233 { 00:10:29.233 "dma_device_id": "system", 00:10:29.233 "dma_device_type": 1 00:10:29.233 }, 00:10:29.233 { 00:10:29.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.233 "dma_device_type": 2 00:10:29.233 } 00:10:29.233 ], 00:10:29.233 "driver_specific": {} 00:10:29.233 } 00:10:29.233 ] 00:10:29.233 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.234 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.494 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.494 "name": "Existed_Raid", 00:10:29.494 "uuid": "b13c0ed7-e18b-4ced-9b5c-c10a5f42804b", 00:10:29.494 "strip_size_kb": 64, 00:10:29.494 "state": "online", 00:10:29.494 "raid_level": "raid0", 00:10:29.494 "superblock": true, 00:10:29.494 "num_base_bdevs": 4, 
00:10:29.494 "num_base_bdevs_discovered": 4, 00:10:29.494 "num_base_bdevs_operational": 4, 00:10:29.494 "base_bdevs_list": [ 00:10:29.494 { 00:10:29.494 "name": "BaseBdev1", 00:10:29.494 "uuid": "5661b689-b523-49e5-9f42-b3c8854df020", 00:10:29.494 "is_configured": true, 00:10:29.494 "data_offset": 2048, 00:10:29.494 "data_size": 63488 00:10:29.494 }, 00:10:29.494 { 00:10:29.494 "name": "BaseBdev2", 00:10:29.494 "uuid": "3e5df4c2-94e7-4b5b-89bf-2182cb9b276f", 00:10:29.494 "is_configured": true, 00:10:29.494 "data_offset": 2048, 00:10:29.494 "data_size": 63488 00:10:29.494 }, 00:10:29.494 { 00:10:29.494 "name": "BaseBdev3", 00:10:29.494 "uuid": "28307eb5-517c-4f47-8542-1ec719b7cf48", 00:10:29.494 "is_configured": true, 00:10:29.494 "data_offset": 2048, 00:10:29.494 "data_size": 63488 00:10:29.494 }, 00:10:29.494 { 00:10:29.494 "name": "BaseBdev4", 00:10:29.494 "uuid": "9883f204-8bfc-4145-9903-9e4b949fa478", 00:10:29.494 "is_configured": true, 00:10:29.494 "data_offset": 2048, 00:10:29.494 "data_size": 63488 00:10:29.494 } 00:10:29.494 ] 00:10:29.494 }' 00:10:29.494 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.494 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.753 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.753 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.753 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.753 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.753 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.753 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.753 
09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.753 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.754 [2024-10-15 09:09:47.446948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.754 "name": "Existed_Raid", 00:10:29.754 "aliases": [ 00:10:29.754 "b13c0ed7-e18b-4ced-9b5c-c10a5f42804b" 00:10:29.754 ], 00:10:29.754 "product_name": "Raid Volume", 00:10:29.754 "block_size": 512, 00:10:29.754 "num_blocks": 253952, 00:10:29.754 "uuid": "b13c0ed7-e18b-4ced-9b5c-c10a5f42804b", 00:10:29.754 "assigned_rate_limits": { 00:10:29.754 "rw_ios_per_sec": 0, 00:10:29.754 "rw_mbytes_per_sec": 0, 00:10:29.754 "r_mbytes_per_sec": 0, 00:10:29.754 "w_mbytes_per_sec": 0 00:10:29.754 }, 00:10:29.754 "claimed": false, 00:10:29.754 "zoned": false, 00:10:29.754 "supported_io_types": { 00:10:29.754 "read": true, 00:10:29.754 "write": true, 00:10:29.754 "unmap": true, 00:10:29.754 "flush": true, 00:10:29.754 "reset": true, 00:10:29.754 "nvme_admin": false, 00:10:29.754 "nvme_io": false, 00:10:29.754 "nvme_io_md": false, 00:10:29.754 "write_zeroes": true, 00:10:29.754 "zcopy": false, 00:10:29.754 "get_zone_info": false, 00:10:29.754 "zone_management": false, 00:10:29.754 "zone_append": false, 00:10:29.754 "compare": false, 00:10:29.754 "compare_and_write": false, 00:10:29.754 "abort": false, 00:10:29.754 "seek_hole": false, 00:10:29.754 "seek_data": false, 00:10:29.754 "copy": false, 00:10:29.754 
"nvme_iov_md": false 00:10:29.754 }, 00:10:29.754 "memory_domains": [ 00:10:29.754 { 00:10:29.754 "dma_device_id": "system", 00:10:29.754 "dma_device_type": 1 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.754 "dma_device_type": 2 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "dma_device_id": "system", 00:10:29.754 "dma_device_type": 1 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.754 "dma_device_type": 2 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "dma_device_id": "system", 00:10:29.754 "dma_device_type": 1 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.754 "dma_device_type": 2 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "dma_device_id": "system", 00:10:29.754 "dma_device_type": 1 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.754 "dma_device_type": 2 00:10:29.754 } 00:10:29.754 ], 00:10:29.754 "driver_specific": { 00:10:29.754 "raid": { 00:10:29.754 "uuid": "b13c0ed7-e18b-4ced-9b5c-c10a5f42804b", 00:10:29.754 "strip_size_kb": 64, 00:10:29.754 "state": "online", 00:10:29.754 "raid_level": "raid0", 00:10:29.754 "superblock": true, 00:10:29.754 "num_base_bdevs": 4, 00:10:29.754 "num_base_bdevs_discovered": 4, 00:10:29.754 "num_base_bdevs_operational": 4, 00:10:29.754 "base_bdevs_list": [ 00:10:29.754 { 00:10:29.754 "name": "BaseBdev1", 00:10:29.754 "uuid": "5661b689-b523-49e5-9f42-b3c8854df020", 00:10:29.754 "is_configured": true, 00:10:29.754 "data_offset": 2048, 00:10:29.754 "data_size": 63488 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "name": "BaseBdev2", 00:10:29.754 "uuid": "3e5df4c2-94e7-4b5b-89bf-2182cb9b276f", 00:10:29.754 "is_configured": true, 00:10:29.754 "data_offset": 2048, 00:10:29.754 "data_size": 63488 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "name": "BaseBdev3", 00:10:29.754 "uuid": "28307eb5-517c-4f47-8542-1ec719b7cf48", 00:10:29.754 "is_configured": true, 
00:10:29.754 "data_offset": 2048, 00:10:29.754 "data_size": 63488 00:10:29.754 }, 00:10:29.754 { 00:10:29.754 "name": "BaseBdev4", 00:10:29.754 "uuid": "9883f204-8bfc-4145-9903-9e4b949fa478", 00:10:29.754 "is_configured": true, 00:10:29.754 "data_offset": 2048, 00:10:29.754 "data_size": 63488 00:10:29.754 } 00:10:29.754 ] 00:10:29.754 } 00:10:29.754 } 00:10:29.754 }' 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.754 BaseBdev2 00:10:29.754 BaseBdev3 00:10:29.754 BaseBdev4' 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.754 09:09:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.754 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.013 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.014 [2024-10-15 09:09:47.778160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.014 [2024-10-15 09:09:47.778214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.014 [2024-10-15 09:09:47.778265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.014 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.271 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:30.271 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.271 "name": "Existed_Raid", 00:10:30.271 "uuid": "b13c0ed7-e18b-4ced-9b5c-c10a5f42804b", 00:10:30.271 "strip_size_kb": 64, 00:10:30.271 "state": "offline", 00:10:30.271 "raid_level": "raid0", 00:10:30.271 "superblock": true, 00:10:30.271 "num_base_bdevs": 4, 00:10:30.271 "num_base_bdevs_discovered": 3, 00:10:30.271 "num_base_bdevs_operational": 3, 00:10:30.271 "base_bdevs_list": [ 00:10:30.271 { 00:10:30.271 "name": null, 00:10:30.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.271 "is_configured": false, 00:10:30.271 "data_offset": 0, 00:10:30.271 "data_size": 63488 00:10:30.271 }, 00:10:30.271 { 00:10:30.272 "name": "BaseBdev2", 00:10:30.272 "uuid": "3e5df4c2-94e7-4b5b-89bf-2182cb9b276f", 00:10:30.272 "is_configured": true, 00:10:30.272 "data_offset": 2048, 00:10:30.272 "data_size": 63488 00:10:30.272 }, 00:10:30.272 { 00:10:30.272 "name": "BaseBdev3", 00:10:30.272 "uuid": "28307eb5-517c-4f47-8542-1ec719b7cf48", 00:10:30.272 "is_configured": true, 00:10:30.272 "data_offset": 2048, 00:10:30.272 "data_size": 63488 00:10:30.272 }, 00:10:30.272 { 00:10:30.272 "name": "BaseBdev4", 00:10:30.272 "uuid": "9883f204-8bfc-4145-9903-9e4b949fa478", 00:10:30.272 "is_configured": true, 00:10:30.272 "data_offset": 2048, 00:10:30.272 "data_size": 63488 00:10:30.272 } 00:10:30.272 ] 00:10:30.272 }' 00:10:30.272 09:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.272 09:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.531 
09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.531 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.531 [2024-10-15 09:09:48.362995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.791 [2024-10-15 09:09:48.528434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:30.791 09:09:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.791 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 [2024-10-15 09:09:48.688232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:31.050 [2024-10-15 09:09:48.688309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 BaseBdev2 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.050 [ 00:10:31.050 { 00:10:31.050 "name": "BaseBdev2", 00:10:31.050 "aliases": [ 00:10:31.050 
"e89917fc-0077-45c4-97ad-7e3cebdb445d" 00:10:31.050 ], 00:10:31.050 "product_name": "Malloc disk", 00:10:31.050 "block_size": 512, 00:10:31.050 "num_blocks": 65536, 00:10:31.050 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:31.050 "assigned_rate_limits": { 00:10:31.050 "rw_ios_per_sec": 0, 00:10:31.050 "rw_mbytes_per_sec": 0, 00:10:31.050 "r_mbytes_per_sec": 0, 00:10:31.050 "w_mbytes_per_sec": 0 00:10:31.050 }, 00:10:31.050 "claimed": false, 00:10:31.050 "zoned": false, 00:10:31.050 "supported_io_types": { 00:10:31.050 "read": true, 00:10:31.050 "write": true, 00:10:31.050 "unmap": true, 00:10:31.050 "flush": true, 00:10:31.050 "reset": true, 00:10:31.050 "nvme_admin": false, 00:10:31.050 "nvme_io": false, 00:10:31.050 "nvme_io_md": false, 00:10:31.050 "write_zeroes": true, 00:10:31.050 "zcopy": true, 00:10:31.050 "get_zone_info": false, 00:10:31.050 "zone_management": false, 00:10:31.050 "zone_append": false, 00:10:31.050 "compare": false, 00:10:31.050 "compare_and_write": false, 00:10:31.050 "abort": true, 00:10:31.050 "seek_hole": false, 00:10:31.050 "seek_data": false, 00:10:31.050 "copy": true, 00:10:31.050 "nvme_iov_md": false 00:10:31.050 }, 00:10:31.050 "memory_domains": [ 00:10:31.050 { 00:10:31.050 "dma_device_id": "system", 00:10:31.050 "dma_device_type": 1 00:10:31.050 }, 00:10:31.050 { 00:10:31.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.050 "dma_device_type": 2 00:10:31.050 } 00:10:31.050 ], 00:10:31.050 "driver_specific": {} 00:10:31.050 } 00:10:31.050 ] 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.050 09:09:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.050 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 BaseBdev3 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 09:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 [ 00:10:31.310 { 
00:10:31.310 "name": "BaseBdev3", 00:10:31.310 "aliases": [ 00:10:31.310 "4f648478-f18b-4c8f-af45-a1975d02f6a3" 00:10:31.310 ], 00:10:31.310 "product_name": "Malloc disk", 00:10:31.310 "block_size": 512, 00:10:31.310 "num_blocks": 65536, 00:10:31.310 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:31.310 "assigned_rate_limits": { 00:10:31.310 "rw_ios_per_sec": 0, 00:10:31.310 "rw_mbytes_per_sec": 0, 00:10:31.310 "r_mbytes_per_sec": 0, 00:10:31.310 "w_mbytes_per_sec": 0 00:10:31.310 }, 00:10:31.310 "claimed": false, 00:10:31.310 "zoned": false, 00:10:31.310 "supported_io_types": { 00:10:31.310 "read": true, 00:10:31.310 "write": true, 00:10:31.310 "unmap": true, 00:10:31.310 "flush": true, 00:10:31.310 "reset": true, 00:10:31.310 "nvme_admin": false, 00:10:31.310 "nvme_io": false, 00:10:31.310 "nvme_io_md": false, 00:10:31.310 "write_zeroes": true, 00:10:31.310 "zcopy": true, 00:10:31.310 "get_zone_info": false, 00:10:31.310 "zone_management": false, 00:10:31.310 "zone_append": false, 00:10:31.310 "compare": false, 00:10:31.310 "compare_and_write": false, 00:10:31.310 "abort": true, 00:10:31.310 "seek_hole": false, 00:10:31.310 "seek_data": false, 00:10:31.310 "copy": true, 00:10:31.310 "nvme_iov_md": false 00:10:31.310 }, 00:10:31.310 "memory_domains": [ 00:10:31.310 { 00:10:31.310 "dma_device_id": "system", 00:10:31.310 "dma_device_type": 1 00:10:31.310 }, 00:10:31.310 { 00:10:31.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.310 "dma_device_type": 2 00:10:31.310 } 00:10:31.310 ], 00:10:31.310 "driver_specific": {} 00:10:31.310 } 00:10:31.310 ] 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 BaseBdev4 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.310 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:31.310 [ 00:10:31.310 { 00:10:31.310 "name": "BaseBdev4", 00:10:31.310 "aliases": [ 00:10:31.310 "d075427f-42c2-4855-8525-887a77e590d0" 00:10:31.310 ], 00:10:31.310 "product_name": "Malloc disk", 00:10:31.310 "block_size": 512, 00:10:31.310 "num_blocks": 65536, 00:10:31.311 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:31.311 "assigned_rate_limits": { 00:10:31.311 "rw_ios_per_sec": 0, 00:10:31.311 "rw_mbytes_per_sec": 0, 00:10:31.311 "r_mbytes_per_sec": 0, 00:10:31.311 "w_mbytes_per_sec": 0 00:10:31.311 }, 00:10:31.311 "claimed": false, 00:10:31.311 "zoned": false, 00:10:31.311 "supported_io_types": { 00:10:31.311 "read": true, 00:10:31.311 "write": true, 00:10:31.311 "unmap": true, 00:10:31.311 "flush": true, 00:10:31.311 "reset": true, 00:10:31.311 "nvme_admin": false, 00:10:31.311 "nvme_io": false, 00:10:31.311 "nvme_io_md": false, 00:10:31.311 "write_zeroes": true, 00:10:31.311 "zcopy": true, 00:10:31.311 "get_zone_info": false, 00:10:31.311 "zone_management": false, 00:10:31.311 "zone_append": false, 00:10:31.311 "compare": false, 00:10:31.311 "compare_and_write": false, 00:10:31.311 "abort": true, 00:10:31.311 "seek_hole": false, 00:10:31.311 "seek_data": false, 00:10:31.311 "copy": true, 00:10:31.311 "nvme_iov_md": false 00:10:31.311 }, 00:10:31.311 "memory_domains": [ 00:10:31.311 { 00:10:31.311 "dma_device_id": "system", 00:10:31.311 "dma_device_type": 1 00:10:31.311 }, 00:10:31.311 { 00:10:31.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.311 "dma_device_type": 2 00:10:31.311 } 00:10:31.311 ], 00:10:31.311 "driver_specific": {} 00:10:31.311 } 00:10:31.311 ] 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:31.311 09:09:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.311 [2024-10-15 09:09:49.123510] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.311 [2024-10-15 09:09:49.123673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.311 [2024-10-15 09:09:49.123743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.311 [2024-10-15 09:09:49.125794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.311 [2024-10-15 09:09:49.125898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.311 "name": "Existed_Raid", 00:10:31.311 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:31.311 "strip_size_kb": 64, 00:10:31.311 "state": "configuring", 00:10:31.311 "raid_level": "raid0", 00:10:31.311 "superblock": true, 00:10:31.311 "num_base_bdevs": 4, 00:10:31.311 "num_base_bdevs_discovered": 3, 00:10:31.311 "num_base_bdevs_operational": 4, 00:10:31.311 "base_bdevs_list": [ 00:10:31.311 { 00:10:31.311 "name": "BaseBdev1", 00:10:31.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.311 "is_configured": false, 00:10:31.311 "data_offset": 0, 00:10:31.311 "data_size": 0 00:10:31.311 }, 00:10:31.311 { 00:10:31.311 "name": "BaseBdev2", 00:10:31.311 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:31.311 "is_configured": true, 00:10:31.311 "data_offset": 2048, 00:10:31.311 "data_size": 63488 
00:10:31.311 }, 00:10:31.311 { 00:10:31.311 "name": "BaseBdev3", 00:10:31.311 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:31.311 "is_configured": true, 00:10:31.311 "data_offset": 2048, 00:10:31.311 "data_size": 63488 00:10:31.311 }, 00:10:31.311 { 00:10:31.311 "name": "BaseBdev4", 00:10:31.311 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:31.311 "is_configured": true, 00:10:31.311 "data_offset": 2048, 00:10:31.311 "data_size": 63488 00:10:31.311 } 00:10:31.311 ] 00:10:31.311 }' 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.311 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.879 [2024-10-15 09:09:49.554794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.879 "name": "Existed_Raid", 00:10:31.879 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:31.879 "strip_size_kb": 64, 00:10:31.879 "state": "configuring", 00:10:31.879 "raid_level": "raid0", 00:10:31.879 "superblock": true, 00:10:31.879 "num_base_bdevs": 4, 00:10:31.879 "num_base_bdevs_discovered": 2, 00:10:31.879 "num_base_bdevs_operational": 4, 00:10:31.879 "base_bdevs_list": [ 00:10:31.879 { 00:10:31.879 "name": "BaseBdev1", 00:10:31.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.879 "is_configured": false, 00:10:31.879 "data_offset": 0, 00:10:31.879 "data_size": 0 00:10:31.879 }, 00:10:31.879 { 00:10:31.879 "name": null, 00:10:31.879 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:31.879 "is_configured": false, 00:10:31.879 "data_offset": 0, 00:10:31.879 "data_size": 63488 
00:10:31.879 }, 00:10:31.879 { 00:10:31.879 "name": "BaseBdev3", 00:10:31.879 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:31.879 "is_configured": true, 00:10:31.879 "data_offset": 2048, 00:10:31.879 "data_size": 63488 00:10:31.879 }, 00:10:31.879 { 00:10:31.879 "name": "BaseBdev4", 00:10:31.879 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:31.879 "is_configured": true, 00:10:31.879 "data_offset": 2048, 00:10:31.879 "data_size": 63488 00:10:31.879 } 00:10:31.879 ] 00:10:31.879 }' 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.879 09:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.138 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.138 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.138 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.138 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.398 [2024-10-15 09:09:50.094105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.398 BaseBdev1 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.398 [ 00:10:32.398 { 00:10:32.398 "name": "BaseBdev1", 00:10:32.398 "aliases": [ 00:10:32.398 "12d24fe2-0549-4339-a84e-6c563d439a39" 00:10:32.398 ], 00:10:32.398 "product_name": "Malloc disk", 00:10:32.398 "block_size": 512, 00:10:32.398 "num_blocks": 65536, 00:10:32.398 "uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:32.398 "assigned_rate_limits": { 00:10:32.398 "rw_ios_per_sec": 0, 00:10:32.398 "rw_mbytes_per_sec": 0, 
00:10:32.398 "r_mbytes_per_sec": 0, 00:10:32.398 "w_mbytes_per_sec": 0 00:10:32.398 }, 00:10:32.398 "claimed": true, 00:10:32.398 "claim_type": "exclusive_write", 00:10:32.398 "zoned": false, 00:10:32.398 "supported_io_types": { 00:10:32.398 "read": true, 00:10:32.398 "write": true, 00:10:32.398 "unmap": true, 00:10:32.398 "flush": true, 00:10:32.398 "reset": true, 00:10:32.398 "nvme_admin": false, 00:10:32.398 "nvme_io": false, 00:10:32.398 "nvme_io_md": false, 00:10:32.398 "write_zeroes": true, 00:10:32.398 "zcopy": true, 00:10:32.398 "get_zone_info": false, 00:10:32.398 "zone_management": false, 00:10:32.398 "zone_append": false, 00:10:32.398 "compare": false, 00:10:32.398 "compare_and_write": false, 00:10:32.398 "abort": true, 00:10:32.398 "seek_hole": false, 00:10:32.398 "seek_data": false, 00:10:32.398 "copy": true, 00:10:32.398 "nvme_iov_md": false 00:10:32.398 }, 00:10:32.398 "memory_domains": [ 00:10:32.398 { 00:10:32.398 "dma_device_id": "system", 00:10:32.398 "dma_device_type": 1 00:10:32.398 }, 00:10:32.398 { 00:10:32.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.398 "dma_device_type": 2 00:10:32.398 } 00:10:32.398 ], 00:10:32.398 "driver_specific": {} 00:10:32.398 } 00:10:32.398 ] 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.398 09:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.398 "name": "Existed_Raid", 00:10:32.398 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:32.398 "strip_size_kb": 64, 00:10:32.398 "state": "configuring", 00:10:32.398 "raid_level": "raid0", 00:10:32.398 "superblock": true, 00:10:32.398 "num_base_bdevs": 4, 00:10:32.398 "num_base_bdevs_discovered": 3, 00:10:32.398 "num_base_bdevs_operational": 4, 00:10:32.398 "base_bdevs_list": [ 00:10:32.398 { 00:10:32.398 "name": "BaseBdev1", 00:10:32.398 "uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:32.398 "is_configured": true, 00:10:32.398 "data_offset": 2048, 00:10:32.398 "data_size": 63488 00:10:32.398 }, 00:10:32.398 { 
00:10:32.398 "name": null, 00:10:32.398 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:32.398 "is_configured": false, 00:10:32.398 "data_offset": 0, 00:10:32.398 "data_size": 63488 00:10:32.398 }, 00:10:32.398 { 00:10:32.398 "name": "BaseBdev3", 00:10:32.398 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:32.398 "is_configured": true, 00:10:32.398 "data_offset": 2048, 00:10:32.398 "data_size": 63488 00:10:32.398 }, 00:10:32.398 { 00:10:32.398 "name": "BaseBdev4", 00:10:32.398 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:32.398 "is_configured": true, 00:10:32.398 "data_offset": 2048, 00:10:32.398 "data_size": 63488 00:10:32.398 } 00:10:32.398 ] 00:10:32.398 }' 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.398 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.967 [2024-10-15 09:09:50.613376] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.967 09:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.967 "name": "Existed_Raid", 00:10:32.967 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:32.967 "strip_size_kb": 64, 00:10:32.967 "state": "configuring", 00:10:32.967 "raid_level": "raid0", 00:10:32.967 "superblock": true, 00:10:32.967 "num_base_bdevs": 4, 00:10:32.967 "num_base_bdevs_discovered": 2, 00:10:32.967 "num_base_bdevs_operational": 4, 00:10:32.967 "base_bdevs_list": [ 00:10:32.967 { 00:10:32.967 "name": "BaseBdev1", 00:10:32.967 "uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:32.967 "is_configured": true, 00:10:32.967 "data_offset": 2048, 00:10:32.967 "data_size": 63488 00:10:32.967 }, 00:10:32.967 { 00:10:32.967 "name": null, 00:10:32.967 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:32.967 "is_configured": false, 00:10:32.967 "data_offset": 0, 00:10:32.967 "data_size": 63488 00:10:32.967 }, 00:10:32.967 { 00:10:32.967 "name": null, 00:10:32.967 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:32.967 "is_configured": false, 00:10:32.967 "data_offset": 0, 00:10:32.967 "data_size": 63488 00:10:32.967 }, 00:10:32.967 { 00:10:32.967 "name": "BaseBdev4", 00:10:32.967 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:32.967 "is_configured": true, 00:10:32.967 "data_offset": 2048, 00:10:32.967 "data_size": 63488 00:10:32.967 } 00:10:32.967 ] 00:10:32.967 }' 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.967 09:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.226 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.226 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:33.226 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.226 
09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.226 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.486 [2024-10-15 09:09:51.136955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.486 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.486 "name": "Existed_Raid", 00:10:33.486 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:33.486 "strip_size_kb": 64, 00:10:33.486 "state": "configuring", 00:10:33.487 "raid_level": "raid0", 00:10:33.487 "superblock": true, 00:10:33.487 "num_base_bdevs": 4, 00:10:33.487 "num_base_bdevs_discovered": 3, 00:10:33.487 "num_base_bdevs_operational": 4, 00:10:33.487 "base_bdevs_list": [ 00:10:33.487 { 00:10:33.487 "name": "BaseBdev1", 00:10:33.487 "uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:33.487 "is_configured": true, 00:10:33.487 "data_offset": 2048, 00:10:33.487 "data_size": 63488 00:10:33.487 }, 00:10:33.487 { 00:10:33.487 "name": null, 00:10:33.487 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:33.487 "is_configured": false, 00:10:33.487 "data_offset": 0, 00:10:33.487 "data_size": 63488 00:10:33.487 }, 00:10:33.487 { 00:10:33.487 "name": "BaseBdev3", 00:10:33.487 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:33.487 "is_configured": true, 00:10:33.487 "data_offset": 2048, 00:10:33.487 "data_size": 63488 00:10:33.487 }, 00:10:33.487 { 00:10:33.487 "name": "BaseBdev4", 00:10:33.487 "uuid": 
"d075427f-42c2-4855-8525-887a77e590d0", 00:10:33.487 "is_configured": true, 00:10:33.487 "data_offset": 2048, 00:10:33.487 "data_size": 63488 00:10:33.487 } 00:10:33.487 ] 00:10:33.487 }' 00:10:33.487 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.487 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.746 [2024-10-15 09:09:51.536887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.746 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.747 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.747 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.747 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.747 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.007 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.007 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.007 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.007 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.007 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.007 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.007 "name": "Existed_Raid", 00:10:34.007 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:34.007 "strip_size_kb": 64, 00:10:34.007 "state": "configuring", 00:10:34.007 "raid_level": "raid0", 00:10:34.007 "superblock": true, 00:10:34.007 "num_base_bdevs": 4, 00:10:34.007 "num_base_bdevs_discovered": 2, 00:10:34.007 "num_base_bdevs_operational": 4, 00:10:34.007 "base_bdevs_list": [ 00:10:34.007 { 00:10:34.007 "name": null, 00:10:34.007 
"uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:34.007 "is_configured": false, 00:10:34.007 "data_offset": 0, 00:10:34.007 "data_size": 63488 00:10:34.007 }, 00:10:34.007 { 00:10:34.007 "name": null, 00:10:34.007 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:34.007 "is_configured": false, 00:10:34.007 "data_offset": 0, 00:10:34.007 "data_size": 63488 00:10:34.007 }, 00:10:34.007 { 00:10:34.007 "name": "BaseBdev3", 00:10:34.007 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:34.007 "is_configured": true, 00:10:34.007 "data_offset": 2048, 00:10:34.007 "data_size": 63488 00:10:34.007 }, 00:10:34.007 { 00:10:34.007 "name": "BaseBdev4", 00:10:34.007 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:34.007 "is_configured": true, 00:10:34.007 "data_offset": 2048, 00:10:34.007 "data_size": 63488 00:10:34.007 } 00:10:34.007 ] 00:10:34.007 }' 00:10:34.007 09:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.007 09:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.266 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.266 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.266 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:34.267 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.267 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.267 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:34.267 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:34.267 09:09:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.267 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.267 [2024-10-15 09:09:52.154977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.526 09:09:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.526 "name": "Existed_Raid", 00:10:34.526 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:34.526 "strip_size_kb": 64, 00:10:34.526 "state": "configuring", 00:10:34.526 "raid_level": "raid0", 00:10:34.526 "superblock": true, 00:10:34.526 "num_base_bdevs": 4, 00:10:34.526 "num_base_bdevs_discovered": 3, 00:10:34.526 "num_base_bdevs_operational": 4, 00:10:34.526 "base_bdevs_list": [ 00:10:34.526 { 00:10:34.526 "name": null, 00:10:34.526 "uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:34.526 "is_configured": false, 00:10:34.526 "data_offset": 0, 00:10:34.526 "data_size": 63488 00:10:34.526 }, 00:10:34.526 { 00:10:34.526 "name": "BaseBdev2", 00:10:34.526 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:34.526 "is_configured": true, 00:10:34.526 "data_offset": 2048, 00:10:34.526 "data_size": 63488 00:10:34.526 }, 00:10:34.526 { 00:10:34.526 "name": "BaseBdev3", 00:10:34.526 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:34.526 "is_configured": true, 00:10:34.526 "data_offset": 2048, 00:10:34.526 "data_size": 63488 00:10:34.526 }, 00:10:34.526 { 00:10:34.526 "name": "BaseBdev4", 00:10:34.526 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:34.526 "is_configured": true, 00:10:34.526 "data_offset": 2048, 00:10:34.526 "data_size": 63488 00:10:34.526 } 00:10:34.526 ] 00:10:34.526 }' 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.526 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.786 09:09:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 12d24fe2-0549-4339-a84e-6c563d439a39 00:10:34.786 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 [2024-10-15 09:09:52.725066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:35.046 [2024-10-15 09:09:52.725440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:35.046 [2024-10-15 09:09:52.725478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.046 [2024-10-15 09:09:52.725785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:35.046 [2024-10-15 09:09:52.725978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:35.046 [2024-10-15 09:09:52.726028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:35.046 [2024-10-15 09:09:52.726204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.046 NewBaseBdev 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.046 09:09:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 [ 00:10:35.046 { 00:10:35.046 "name": "NewBaseBdev", 00:10:35.046 "aliases": [ 00:10:35.046 "12d24fe2-0549-4339-a84e-6c563d439a39" 00:10:35.046 ], 00:10:35.046 "product_name": "Malloc disk", 00:10:35.046 "block_size": 512, 00:10:35.046 "num_blocks": 65536, 00:10:35.046 "uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:35.046 "assigned_rate_limits": { 00:10:35.046 "rw_ios_per_sec": 0, 00:10:35.046 "rw_mbytes_per_sec": 0, 00:10:35.046 "r_mbytes_per_sec": 0, 00:10:35.046 "w_mbytes_per_sec": 0 00:10:35.046 }, 00:10:35.046 "claimed": true, 00:10:35.046 "claim_type": "exclusive_write", 00:10:35.046 "zoned": false, 00:10:35.046 "supported_io_types": { 00:10:35.046 "read": true, 00:10:35.046 "write": true, 00:10:35.046 "unmap": true, 00:10:35.046 "flush": true, 00:10:35.046 "reset": true, 00:10:35.046 "nvme_admin": false, 00:10:35.046 "nvme_io": false, 00:10:35.046 "nvme_io_md": false, 00:10:35.046 "write_zeroes": true, 00:10:35.046 "zcopy": true, 00:10:35.046 "get_zone_info": false, 00:10:35.046 "zone_management": false, 00:10:35.046 "zone_append": false, 00:10:35.046 "compare": false, 00:10:35.046 "compare_and_write": false, 00:10:35.046 "abort": true, 00:10:35.046 "seek_hole": false, 00:10:35.046 "seek_data": false, 00:10:35.046 "copy": true, 00:10:35.046 "nvme_iov_md": false 00:10:35.046 }, 00:10:35.046 "memory_domains": [ 00:10:35.046 { 00:10:35.046 "dma_device_id": "system", 00:10:35.046 "dma_device_type": 1 00:10:35.046 }, 00:10:35.046 { 00:10:35.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.046 "dma_device_type": 2 00:10:35.046 } 00:10:35.046 ], 00:10:35.046 "driver_specific": {} 00:10:35.046 } 00:10:35.046 ] 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.046 09:09:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.046 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.046 "name": "Existed_Raid", 00:10:35.046 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:35.046 "strip_size_kb": 64, 00:10:35.046 
"state": "online", 00:10:35.046 "raid_level": "raid0", 00:10:35.046 "superblock": true, 00:10:35.046 "num_base_bdevs": 4, 00:10:35.046 "num_base_bdevs_discovered": 4, 00:10:35.046 "num_base_bdevs_operational": 4, 00:10:35.046 "base_bdevs_list": [ 00:10:35.046 { 00:10:35.046 "name": "NewBaseBdev", 00:10:35.046 "uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:35.046 "is_configured": true, 00:10:35.046 "data_offset": 2048, 00:10:35.046 "data_size": 63488 00:10:35.046 }, 00:10:35.046 { 00:10:35.046 "name": "BaseBdev2", 00:10:35.047 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:35.047 "is_configured": true, 00:10:35.047 "data_offset": 2048, 00:10:35.047 "data_size": 63488 00:10:35.047 }, 00:10:35.047 { 00:10:35.047 "name": "BaseBdev3", 00:10:35.047 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:35.047 "is_configured": true, 00:10:35.047 "data_offset": 2048, 00:10:35.047 "data_size": 63488 00:10:35.047 }, 00:10:35.047 { 00:10:35.047 "name": "BaseBdev4", 00:10:35.047 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:35.047 "is_configured": true, 00:10:35.047 "data_offset": 2048, 00:10:35.047 "data_size": 63488 00:10:35.047 } 00:10:35.047 ] 00:10:35.047 }' 00:10:35.047 09:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.047 09:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.306 
09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.306 [2024-10-15 09:09:53.129293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.306 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.306 "name": "Existed_Raid", 00:10:35.306 "aliases": [ 00:10:35.306 "9538d8c5-a54d-4098-a470-dc9b9756f74b" 00:10:35.306 ], 00:10:35.306 "product_name": "Raid Volume", 00:10:35.306 "block_size": 512, 00:10:35.306 "num_blocks": 253952, 00:10:35.306 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:35.306 "assigned_rate_limits": { 00:10:35.306 "rw_ios_per_sec": 0, 00:10:35.306 "rw_mbytes_per_sec": 0, 00:10:35.306 "r_mbytes_per_sec": 0, 00:10:35.306 "w_mbytes_per_sec": 0 00:10:35.306 }, 00:10:35.306 "claimed": false, 00:10:35.306 "zoned": false, 00:10:35.306 "supported_io_types": { 00:10:35.306 "read": true, 00:10:35.306 "write": true, 00:10:35.306 "unmap": true, 00:10:35.306 "flush": true, 00:10:35.306 "reset": true, 00:10:35.306 "nvme_admin": false, 00:10:35.306 "nvme_io": false, 00:10:35.306 "nvme_io_md": false, 00:10:35.306 "write_zeroes": true, 00:10:35.306 "zcopy": false, 00:10:35.306 "get_zone_info": false, 00:10:35.306 "zone_management": false, 00:10:35.306 "zone_append": false, 00:10:35.306 "compare": false, 00:10:35.306 "compare_and_write": false, 00:10:35.306 "abort": 
false, 00:10:35.306 "seek_hole": false, 00:10:35.306 "seek_data": false, 00:10:35.306 "copy": false, 00:10:35.306 "nvme_iov_md": false 00:10:35.306 }, 00:10:35.306 "memory_domains": [ 00:10:35.306 { 00:10:35.306 "dma_device_id": "system", 00:10:35.306 "dma_device_type": 1 00:10:35.306 }, 00:10:35.306 { 00:10:35.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.306 "dma_device_type": 2 00:10:35.306 }, 00:10:35.306 { 00:10:35.306 "dma_device_id": "system", 00:10:35.306 "dma_device_type": 1 00:10:35.306 }, 00:10:35.306 { 00:10:35.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.306 "dma_device_type": 2 00:10:35.306 }, 00:10:35.306 { 00:10:35.306 "dma_device_id": "system", 00:10:35.306 "dma_device_type": 1 00:10:35.306 }, 00:10:35.306 { 00:10:35.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.306 "dma_device_type": 2 00:10:35.306 }, 00:10:35.306 { 00:10:35.306 "dma_device_id": "system", 00:10:35.306 "dma_device_type": 1 00:10:35.306 }, 00:10:35.306 { 00:10:35.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.306 "dma_device_type": 2 00:10:35.306 } 00:10:35.306 ], 00:10:35.306 "driver_specific": { 00:10:35.306 "raid": { 00:10:35.306 "uuid": "9538d8c5-a54d-4098-a470-dc9b9756f74b", 00:10:35.306 "strip_size_kb": 64, 00:10:35.306 "state": "online", 00:10:35.306 "raid_level": "raid0", 00:10:35.306 "superblock": true, 00:10:35.306 "num_base_bdevs": 4, 00:10:35.306 "num_base_bdevs_discovered": 4, 00:10:35.306 "num_base_bdevs_operational": 4, 00:10:35.306 "base_bdevs_list": [ 00:10:35.306 { 00:10:35.306 "name": "NewBaseBdev", 00:10:35.306 "uuid": "12d24fe2-0549-4339-a84e-6c563d439a39", 00:10:35.306 "is_configured": true, 00:10:35.306 "data_offset": 2048, 00:10:35.306 "data_size": 63488 00:10:35.306 }, 00:10:35.306 { 00:10:35.306 "name": "BaseBdev2", 00:10:35.306 "uuid": "e89917fc-0077-45c4-97ad-7e3cebdb445d", 00:10:35.306 "is_configured": true, 00:10:35.306 "data_offset": 2048, 00:10:35.306 "data_size": 63488 00:10:35.306 }, 00:10:35.307 { 00:10:35.307 
"name": "BaseBdev3", 00:10:35.307 "uuid": "4f648478-f18b-4c8f-af45-a1975d02f6a3", 00:10:35.307 "is_configured": true, 00:10:35.307 "data_offset": 2048, 00:10:35.307 "data_size": 63488 00:10:35.307 }, 00:10:35.307 { 00:10:35.307 "name": "BaseBdev4", 00:10:35.307 "uuid": "d075427f-42c2-4855-8525-887a77e590d0", 00:10:35.307 "is_configured": true, 00:10:35.307 "data_offset": 2048, 00:10:35.307 "data_size": 63488 00:10:35.307 } 00:10:35.307 ] 00:10:35.307 } 00:10:35.307 } 00:10:35.307 }' 00:10:35.307 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:35.566 BaseBdev2 00:10:35.566 BaseBdev3 00:10:35.566 BaseBdev4' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.566 09:09:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.566 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.825 [2024-10-15 09:09:53.472432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.825 [2024-10-15 09:09:53.472502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.825 [2024-10-15 09:09:53.472599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.825 [2024-10-15 09:09:53.472677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.825 [2024-10-15 09:09:53.472689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70146 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70146 ']' 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70146 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.825 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70146 00:10:35.826 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.826 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.826 killing process with pid 70146 00:10:35.826 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70146' 00:10:35.826 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70146 00:10:35.826 [2024-10-15 09:09:53.521885] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.826 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70146 00:10:36.085 [2024-10-15 09:09:53.935891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.463 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:37.463 00:10:37.463 real 0m11.568s 00:10:37.463 user 0m18.097s 00:10:37.463 sys 0m2.128s 00:10:37.463 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.463 
************************************ 00:10:37.463 END TEST raid_state_function_test_sb 00:10:37.463 ************************************ 00:10:37.463 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.463 09:09:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:37.463 09:09:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:37.463 09:09:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.463 09:09:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.463 ************************************ 00:10:37.463 START TEST raid_superblock_test 00:10:37.463 ************************************ 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70817 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70817 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70817 ']' 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.463 09:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.463 [2024-10-15 09:09:55.310477] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:10:37.463 [2024-10-15 09:09:55.310617] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:10:37.723 [2024-10-15 09:09:55.470132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.723 [2024-10-15 09:09:55.599292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.983 [2024-10-15 09:09:55.815808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.983 [2024-10-15 09:09:55.815862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:38.552 
09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.552 malloc1 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.552 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.552 [2024-10-15 09:09:56.216761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:38.552 [2024-10-15 09:09:56.216932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.552 [2024-10-15 09:09:56.216980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:38.552 [2024-10-15 09:09:56.217012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.552 [2024-10-15 09:09:56.219266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.552 [2024-10-15 09:09:56.219369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:38.553 pt1 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.553 malloc2 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.553 [2024-10-15 09:09:56.275079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.553 [2024-10-15 09:09:56.275163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.553 [2024-10-15 09:09:56.275186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:38.553 [2024-10-15 09:09:56.275195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.553 [2024-10-15 09:09:56.277371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.553 [2024-10-15 09:09:56.277419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.553 
pt2 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.553 malloc3 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.553 [2024-10-15 09:09:56.343493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:38.553 [2024-10-15 09:09:56.343659] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.553 [2024-10-15 09:09:56.343710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:38.553 [2024-10-15 09:09:56.343746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.553 [2024-10-15 09:09:56.345980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.553 [2024-10-15 09:09:56.346066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:38.553 pt3 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.553 malloc4 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.553 [2024-10-15 09:09:56.404536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:38.553 [2024-10-15 09:09:56.404715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.553 [2024-10-15 09:09:56.404763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:38.553 [2024-10-15 09:09:56.404810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.553 [2024-10-15 09:09:56.407033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.553 [2024-10-15 09:09:56.407122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:38.553 pt4 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.553 [2024-10-15 09:09:56.416542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.553 [2024-10-15 
09:09:56.418420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.553 [2024-10-15 09:09:56.418482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:38.553 [2024-10-15 09:09:56.418543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:38.553 [2024-10-15 09:09:56.418742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:38.553 [2024-10-15 09:09:56.418756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:38.553 [2024-10-15 09:09:56.419026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:38.553 [2024-10-15 09:09:56.419186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:38.553 [2024-10-15 09:09:56.419199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:38.553 [2024-10-15 09:09:56.419358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.553 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.813 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.813 "name": "raid_bdev1", 00:10:38.813 "uuid": "dc15597e-f002-4ee3-8ff0-a9b2281bc42d", 00:10:38.813 "strip_size_kb": 64, 00:10:38.813 "state": "online", 00:10:38.813 "raid_level": "raid0", 00:10:38.813 "superblock": true, 00:10:38.813 "num_base_bdevs": 4, 00:10:38.813 "num_base_bdevs_discovered": 4, 00:10:38.813 "num_base_bdevs_operational": 4, 00:10:38.813 "base_bdevs_list": [ 00:10:38.813 { 00:10:38.813 "name": "pt1", 00:10:38.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.813 "is_configured": true, 00:10:38.813 "data_offset": 2048, 00:10:38.813 "data_size": 63488 00:10:38.813 }, 00:10:38.813 { 00:10:38.813 "name": "pt2", 00:10:38.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.813 "is_configured": true, 00:10:38.813 "data_offset": 2048, 00:10:38.813 "data_size": 63488 00:10:38.813 }, 00:10:38.813 { 00:10:38.813 "name": "pt3", 00:10:38.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.813 "is_configured": true, 00:10:38.813 "data_offset": 2048, 00:10:38.813 
"data_size": 63488 00:10:38.813 }, 00:10:38.813 { 00:10:38.813 "name": "pt4", 00:10:38.813 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.813 "is_configured": true, 00:10:38.813 "data_offset": 2048, 00:10:38.813 "data_size": 63488 00:10:38.813 } 00:10:38.813 ] 00:10:38.813 }' 00:10:38.813 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.813 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.073 [2024-10-15 09:09:56.928098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.073 09:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.333 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.333 "name": "raid_bdev1", 00:10:39.334 "aliases": [ 00:10:39.334 "dc15597e-f002-4ee3-8ff0-a9b2281bc42d" 
00:10:39.334 ], 00:10:39.334 "product_name": "Raid Volume", 00:10:39.334 "block_size": 512, 00:10:39.334 "num_blocks": 253952, 00:10:39.334 "uuid": "dc15597e-f002-4ee3-8ff0-a9b2281bc42d", 00:10:39.334 "assigned_rate_limits": { 00:10:39.334 "rw_ios_per_sec": 0, 00:10:39.334 "rw_mbytes_per_sec": 0, 00:10:39.334 "r_mbytes_per_sec": 0, 00:10:39.334 "w_mbytes_per_sec": 0 00:10:39.334 }, 00:10:39.334 "claimed": false, 00:10:39.334 "zoned": false, 00:10:39.334 "supported_io_types": { 00:10:39.334 "read": true, 00:10:39.334 "write": true, 00:10:39.334 "unmap": true, 00:10:39.334 "flush": true, 00:10:39.334 "reset": true, 00:10:39.334 "nvme_admin": false, 00:10:39.334 "nvme_io": false, 00:10:39.334 "nvme_io_md": false, 00:10:39.334 "write_zeroes": true, 00:10:39.334 "zcopy": false, 00:10:39.334 "get_zone_info": false, 00:10:39.334 "zone_management": false, 00:10:39.334 "zone_append": false, 00:10:39.334 "compare": false, 00:10:39.334 "compare_and_write": false, 00:10:39.334 "abort": false, 00:10:39.334 "seek_hole": false, 00:10:39.334 "seek_data": false, 00:10:39.334 "copy": false, 00:10:39.334 "nvme_iov_md": false 00:10:39.334 }, 00:10:39.334 "memory_domains": [ 00:10:39.334 { 00:10:39.334 "dma_device_id": "system", 00:10:39.334 "dma_device_type": 1 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.334 "dma_device_type": 2 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "dma_device_id": "system", 00:10:39.334 "dma_device_type": 1 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.334 "dma_device_type": 2 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "dma_device_id": "system", 00:10:39.334 "dma_device_type": 1 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.334 "dma_device_type": 2 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "dma_device_id": "system", 00:10:39.334 "dma_device_type": 1 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:39.334 "dma_device_type": 2 00:10:39.334 } 00:10:39.334 ], 00:10:39.334 "driver_specific": { 00:10:39.334 "raid": { 00:10:39.334 "uuid": "dc15597e-f002-4ee3-8ff0-a9b2281bc42d", 00:10:39.334 "strip_size_kb": 64, 00:10:39.334 "state": "online", 00:10:39.334 "raid_level": "raid0", 00:10:39.334 "superblock": true, 00:10:39.334 "num_base_bdevs": 4, 00:10:39.334 "num_base_bdevs_discovered": 4, 00:10:39.334 "num_base_bdevs_operational": 4, 00:10:39.334 "base_bdevs_list": [ 00:10:39.334 { 00:10:39.334 "name": "pt1", 00:10:39.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.334 "is_configured": true, 00:10:39.334 "data_offset": 2048, 00:10:39.334 "data_size": 63488 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "pt2", 00:10:39.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.334 "is_configured": true, 00:10:39.334 "data_offset": 2048, 00:10:39.334 "data_size": 63488 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "pt3", 00:10:39.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.334 "is_configured": true, 00:10:39.334 "data_offset": 2048, 00:10:39.334 "data_size": 63488 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "pt4", 00:10:39.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.334 "is_configured": true, 00:10:39.334 "data_offset": 2048, 00:10:39.334 "data_size": 63488 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 } 00:10:39.334 } 00:10:39.334 }' 00:10:39.334 09:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:39.334 pt2 00:10:39.334 pt3 00:10:39.334 pt4' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.334 09:09:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.334 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.595 [2024-10-15 09:09:57.251523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dc15597e-f002-4ee3-8ff0-a9b2281bc42d 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dc15597e-f002-4ee3-8ff0-a9b2281bc42d ']' 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.595 [2024-10-15 09:09:57.295086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.595 [2024-10-15 09:09:57.295129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.595 [2024-10-15 09:09:57.295234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.595 [2024-10-15 09:09:57.295312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.595 [2024-10-15 09:09:57.295329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.595 09:09:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.595 [2024-10-15 09:09:57.462884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:39.595 [2024-10-15 09:09:57.465089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:39.595 [2024-10-15 09:09:57.465151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:39.595 [2024-10-15 09:09:57.465192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:39.595 [2024-10-15 09:09:57.465251] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:39.595 [2024-10-15 09:09:57.465311] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:39.595 [2024-10-15 09:09:57.465334] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:39.595 [2024-10-15 09:09:57.465357] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:39.595 [2024-10-15 09:09:57.465373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.595 [2024-10-15 09:09:57.465386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:39.595 request: 00:10:39.595 { 00:10:39.595 "name": "raid_bdev1", 00:10:39.595 "raid_level": "raid0", 00:10:39.595 "base_bdevs": [ 00:10:39.595 "malloc1", 00:10:39.595 "malloc2", 00:10:39.595 "malloc3", 00:10:39.595 "malloc4" 00:10:39.595 ], 00:10:39.595 "strip_size_kb": 64, 00:10:39.595 "superblock": false, 00:10:39.595 "method": "bdev_raid_create", 00:10:39.595 "req_id": 1 00:10:39.595 } 00:10:39.595 Got JSON-RPC error response 00:10:39.595 response: 00:10:39.595 { 00:10:39.595 "code": -17, 00:10:39.595 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:39.595 } 00:10:39.595 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.596 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.855 [2024-10-15 09:09:57.526764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:39.855 [2024-10-15 09:09:57.526947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.855 [2024-10-15 09:09:57.526988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:39.855 [2024-10-15 09:09:57.527023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.855 [2024-10-15 09:09:57.529566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.855 [2024-10-15 09:09:57.529671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:39.855 [2024-10-15 09:09:57.529821] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:39.855 [2024-10-15 09:09:57.529931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:39.855 pt1 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.855 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.855 "name": "raid_bdev1", 00:10:39.855 "uuid": "dc15597e-f002-4ee3-8ff0-a9b2281bc42d", 00:10:39.855 "strip_size_kb": 64, 00:10:39.855 "state": "configuring", 00:10:39.855 "raid_level": "raid0", 00:10:39.855 "superblock": true, 00:10:39.855 "num_base_bdevs": 4, 00:10:39.855 "num_base_bdevs_discovered": 1, 00:10:39.855 "num_base_bdevs_operational": 4, 00:10:39.855 "base_bdevs_list": [ 00:10:39.855 { 00:10:39.855 "name": "pt1", 00:10:39.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.855 "is_configured": true, 00:10:39.855 "data_offset": 2048, 00:10:39.855 "data_size": 63488 00:10:39.855 }, 00:10:39.855 { 00:10:39.855 "name": null, 00:10:39.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.855 "is_configured": false, 00:10:39.855 "data_offset": 2048, 00:10:39.855 "data_size": 63488 00:10:39.855 }, 00:10:39.855 { 00:10:39.855 "name": null, 00:10:39.855 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.855 "is_configured": false, 00:10:39.855 "data_offset": 2048, 00:10:39.855 "data_size": 63488 00:10:39.855 }, 00:10:39.855 { 00:10:39.855 "name": null, 00:10:39.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.855 "is_configured": false, 00:10:39.855 "data_offset": 2048, 00:10:39.855 "data_size": 63488 00:10:39.855 } 00:10:39.855 ] 00:10:39.855 }' 00:10:39.856 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.856 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.115 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:40.115 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:40.115 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.115 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.115 [2024-10-15 09:09:57.989951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:40.115 [2024-10-15 09:09:57.990060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.115 [2024-10-15 09:09:57.990086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:40.115 [2024-10-15 09:09:57.990100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.115 [2024-10-15 09:09:57.990645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.115 [2024-10-15 09:09:57.990670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:40.115 [2024-10-15 09:09:57.990781] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:40.115 [2024-10-15 09:09:57.990810] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:40.115 pt2 00:10:40.115 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.115 09:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:40.115 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.115 09:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.115 [2024-10-15 09:09:58.001984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.115 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.374 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.374 09:09:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.374 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.374 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.374 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.374 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.374 "name": "raid_bdev1", 00:10:40.374 "uuid": "dc15597e-f002-4ee3-8ff0-a9b2281bc42d", 00:10:40.374 "strip_size_kb": 64, 00:10:40.374 "state": "configuring", 00:10:40.374 "raid_level": "raid0", 00:10:40.374 "superblock": true, 00:10:40.374 "num_base_bdevs": 4, 00:10:40.374 "num_base_bdevs_discovered": 1, 00:10:40.374 "num_base_bdevs_operational": 4, 00:10:40.374 "base_bdevs_list": [ 00:10:40.374 { 00:10:40.374 "name": "pt1", 00:10:40.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.374 "is_configured": true, 00:10:40.374 "data_offset": 2048, 00:10:40.374 "data_size": 63488 00:10:40.374 }, 00:10:40.374 { 00:10:40.374 "name": null, 00:10:40.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.374 "is_configured": false, 00:10:40.374 "data_offset": 0, 00:10:40.374 "data_size": 63488 00:10:40.374 }, 00:10:40.374 { 00:10:40.374 "name": null, 00:10:40.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.374 "is_configured": false, 00:10:40.374 "data_offset": 2048, 00:10:40.374 "data_size": 63488 00:10:40.374 }, 00:10:40.374 { 00:10:40.374 "name": null, 00:10:40.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:40.374 "is_configured": false, 00:10:40.374 "data_offset": 2048, 00:10:40.374 "data_size": 63488 00:10:40.374 } 00:10:40.374 ] 00:10:40.374 }' 00:10:40.374 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.374 09:09:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.633 [2024-10-15 09:09:58.481130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:40.633 [2024-10-15 09:09:58.481209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.633 [2024-10-15 09:09:58.481233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:40.633 [2024-10-15 09:09:58.481244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.633 [2024-10-15 09:09:58.481767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.633 [2024-10-15 09:09:58.481869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:40.633 [2024-10-15 09:09:58.481988] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:40.633 [2024-10-15 09:09:58.482014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:40.633 pt2 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.633 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.633 [2024-10-15 09:09:58.497082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:40.633 [2024-10-15 09:09:58.497203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.633 [2024-10-15 09:09:58.497267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:40.633 [2024-10-15 09:09:58.497313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.633 [2024-10-15 09:09:58.497830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.633 [2024-10-15 09:09:58.497864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:40.634 [2024-10-15 09:09:58.497956] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:40.634 [2024-10-15 09:09:58.497980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:40.634 pt3 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.634 [2024-10-15 09:09:58.509028] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:40.634 [2024-10-15 09:09:58.509093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.634 [2024-10-15 09:09:58.509118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:40.634 [2024-10-15 09:09:58.509130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.634 [2024-10-15 09:09:58.509604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.634 [2024-10-15 09:09:58.509640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:40.634 [2024-10-15 09:09:58.509744] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:40.634 [2024-10-15 09:09:58.509770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:40.634 [2024-10-15 09:09:58.509952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.634 [2024-10-15 09:09:58.509969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:40.634 [2024-10-15 09:09:58.510313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:40.634 [2024-10-15 09:09:58.510480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.634 [2024-10-15 09:09:58.510494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:40.634 [2024-10-15 09:09:58.510636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.634 pt4 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.634 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.893 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.893 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.893 "name": "raid_bdev1", 00:10:40.893 "uuid": "dc15597e-f002-4ee3-8ff0-a9b2281bc42d", 00:10:40.893 "strip_size_kb": 64, 00:10:40.893 "state": "online", 00:10:40.893 "raid_level": "raid0", 00:10:40.893 
"superblock": true, 00:10:40.893 "num_base_bdevs": 4, 00:10:40.893 "num_base_bdevs_discovered": 4, 00:10:40.893 "num_base_bdevs_operational": 4, 00:10:40.893 "base_bdevs_list": [ 00:10:40.893 { 00:10:40.893 "name": "pt1", 00:10:40.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.893 "is_configured": true, 00:10:40.893 "data_offset": 2048, 00:10:40.893 "data_size": 63488 00:10:40.893 }, 00:10:40.893 { 00:10:40.893 "name": "pt2", 00:10:40.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.893 "is_configured": true, 00:10:40.893 "data_offset": 2048, 00:10:40.893 "data_size": 63488 00:10:40.893 }, 00:10:40.893 { 00:10:40.893 "name": "pt3", 00:10:40.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.893 "is_configured": true, 00:10:40.893 "data_offset": 2048, 00:10:40.893 "data_size": 63488 00:10:40.893 }, 00:10:40.893 { 00:10:40.893 "name": "pt4", 00:10:40.893 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:40.893 "is_configured": true, 00:10:40.893 "data_offset": 2048, 00:10:40.893 "data_size": 63488 00:10:40.893 } 00:10:40.893 ] 00:10:40.893 }' 00:10:40.893 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.893 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.151 09:09:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.151 [2024-10-15 09:09:58.948757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.151 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.151 "name": "raid_bdev1", 00:10:41.151 "aliases": [ 00:10:41.151 "dc15597e-f002-4ee3-8ff0-a9b2281bc42d" 00:10:41.151 ], 00:10:41.151 "product_name": "Raid Volume", 00:10:41.151 "block_size": 512, 00:10:41.151 "num_blocks": 253952, 00:10:41.151 "uuid": "dc15597e-f002-4ee3-8ff0-a9b2281bc42d", 00:10:41.151 "assigned_rate_limits": { 00:10:41.151 "rw_ios_per_sec": 0, 00:10:41.151 "rw_mbytes_per_sec": 0, 00:10:41.151 "r_mbytes_per_sec": 0, 00:10:41.151 "w_mbytes_per_sec": 0 00:10:41.151 }, 00:10:41.151 "claimed": false, 00:10:41.151 "zoned": false, 00:10:41.151 "supported_io_types": { 00:10:41.151 "read": true, 00:10:41.151 "write": true, 00:10:41.151 "unmap": true, 00:10:41.151 "flush": true, 00:10:41.151 "reset": true, 00:10:41.151 "nvme_admin": false, 00:10:41.151 "nvme_io": false, 00:10:41.151 "nvme_io_md": false, 00:10:41.151 "write_zeroes": true, 00:10:41.151 "zcopy": false, 00:10:41.151 "get_zone_info": false, 00:10:41.152 "zone_management": false, 00:10:41.152 "zone_append": false, 00:10:41.152 "compare": false, 00:10:41.152 "compare_and_write": false, 00:10:41.152 "abort": false, 00:10:41.152 "seek_hole": false, 00:10:41.152 "seek_data": false, 00:10:41.152 "copy": false, 00:10:41.152 "nvme_iov_md": false 00:10:41.152 }, 00:10:41.152 
"memory_domains": [ 00:10:41.152 { 00:10:41.152 "dma_device_id": "system", 00:10:41.152 "dma_device_type": 1 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.152 "dma_device_type": 2 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "dma_device_id": "system", 00:10:41.152 "dma_device_type": 1 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.152 "dma_device_type": 2 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "dma_device_id": "system", 00:10:41.152 "dma_device_type": 1 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.152 "dma_device_type": 2 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "dma_device_id": "system", 00:10:41.152 "dma_device_type": 1 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.152 "dma_device_type": 2 00:10:41.152 } 00:10:41.152 ], 00:10:41.152 "driver_specific": { 00:10:41.152 "raid": { 00:10:41.152 "uuid": "dc15597e-f002-4ee3-8ff0-a9b2281bc42d", 00:10:41.152 "strip_size_kb": 64, 00:10:41.152 "state": "online", 00:10:41.152 "raid_level": "raid0", 00:10:41.152 "superblock": true, 00:10:41.152 "num_base_bdevs": 4, 00:10:41.152 "num_base_bdevs_discovered": 4, 00:10:41.152 "num_base_bdevs_operational": 4, 00:10:41.152 "base_bdevs_list": [ 00:10:41.152 { 00:10:41.152 "name": "pt1", 00:10:41.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.152 "is_configured": true, 00:10:41.152 "data_offset": 2048, 00:10:41.152 "data_size": 63488 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "name": "pt2", 00:10:41.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.152 "is_configured": true, 00:10:41.152 "data_offset": 2048, 00:10:41.152 "data_size": 63488 00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "name": "pt3", 00:10:41.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.152 "is_configured": true, 00:10:41.152 "data_offset": 2048, 00:10:41.152 "data_size": 63488 
00:10:41.152 }, 00:10:41.152 { 00:10:41.152 "name": "pt4", 00:10:41.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:41.152 "is_configured": true, 00:10:41.152 "data_offset": 2048, 00:10:41.152 "data_size": 63488 00:10:41.152 } 00:10:41.152 ] 00:10:41.152 } 00:10:41.152 } 00:10:41.152 }' 00:10:41.152 09:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.152 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:41.152 pt2 00:10:41.152 pt3 00:10:41.152 pt4' 00:10:41.152 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:41.411 
09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.411 [2024-10-15 09:09:59.280177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.411 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dc15597e-f002-4ee3-8ff0-a9b2281bc42d '!=' dc15597e-f002-4ee3-8ff0-a9b2281bc42d ']' 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70817 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70817 ']' 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70817 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70817 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70817' 00:10:41.669 killing process with pid 70817 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70817 00:10:41.669 [2024-10-15 09:09:59.353717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.669 [2024-10-15 09:09:59.353851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.669 09:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70817 00:10:41.669 [2024-10-15 09:09:59.353941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.669 [2024-10-15 09:09:59.353953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:42.236 [2024-10-15 09:09:59.824961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.618 09:10:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:43.618 00:10:43.618 real 0m5.928s 00:10:43.618 user 0m8.396s 00:10:43.618 sys 0m0.977s 00:10:43.618 09:10:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.618 ************************************ 00:10:43.618 END TEST raid_superblock_test 00:10:43.618 ************************************ 00:10:43.618 09:10:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.618 09:10:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:43.618 09:10:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:43.618 09:10:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.618 09:10:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.618 ************************************ 00:10:43.618 START TEST raid_read_error_test 00:10:43.618 ************************************ 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9O14j7R5Fh 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71082 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71082 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71082 ']' 00:10:43.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.618 09:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.618 [2024-10-15 09:10:01.325637] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:10:43.618 [2024-10-15 09:10:01.325804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71082 ] 00:10:43.618 [2024-10-15 09:10:01.497950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.877 [2024-10-15 09:10:01.628902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.136 [2024-10-15 09:10:01.862876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.136 [2024-10-15 09:10:01.862936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.396 BaseBdev1_malloc 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.396 true 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.396 [2024-10-15 09:10:02.247017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:44.396 [2024-10-15 09:10:02.247195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.396 [2024-10-15 09:10:02.247240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:44.396 [2024-10-15 09:10:02.247278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.396 [2024-10-15 09:10:02.249638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.396 [2024-10-15 09:10:02.249743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:44.396 BaseBdev1 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.396 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 BaseBdev2_malloc 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 true 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 [2024-10-15 09:10:02.320429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:44.656 [2024-10-15 09:10:02.320513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.656 [2024-10-15 09:10:02.320535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:44.656 [2024-10-15 09:10:02.320548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.656 [2024-10-15 09:10:02.323108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.656 [2024-10-15 09:10:02.323178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:44.656 BaseBdev2 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 BaseBdev3_malloc 00:10:44.656 09:10:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 true 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 [2024-10-15 09:10:02.404720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:44.656 [2024-10-15 09:10:02.404807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.656 [2024-10-15 09:10:02.404829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:44.656 [2024-10-15 09:10:02.404843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.656 [2024-10-15 09:10:02.407312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.656 [2024-10-15 09:10:02.407360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:44.656 BaseBdev3 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 BaseBdev4_malloc 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 true 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 [2024-10-15 09:10:02.474851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:44.656 [2024-10-15 09:10:02.474936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.656 [2024-10-15 09:10:02.474961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:44.656 [2024-10-15 09:10:02.474973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.656 [2024-10-15 09:10:02.477376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.656 [2024-10-15 09:10:02.477424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:44.656 BaseBdev4 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.656 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.657 [2024-10-15 09:10:02.486904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.657 [2024-10-15 09:10:02.488914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.657 [2024-10-15 09:10:02.489006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.657 [2024-10-15 09:10:02.489080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.657 [2024-10-15 09:10:02.489325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:44.657 [2024-10-15 09:10:02.489343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.657 [2024-10-15 09:10:02.489631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:44.657 [2024-10-15 09:10:02.489839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:44.657 [2024-10-15 09:10:02.489850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:44.657 [2024-10-15 09:10:02.490036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:44.657 09:10:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.657 "name": "raid_bdev1", 00:10:44.657 "uuid": "4e49f23b-9e26-418c-bbb4-62239f2ae30f", 00:10:44.657 "strip_size_kb": 64, 00:10:44.657 "state": "online", 00:10:44.657 "raid_level": "raid0", 00:10:44.657 "superblock": true, 00:10:44.657 "num_base_bdevs": 4, 00:10:44.657 "num_base_bdevs_discovered": 4, 00:10:44.657 "num_base_bdevs_operational": 4, 00:10:44.657 "base_bdevs_list": [ 00:10:44.657 
{ 00:10:44.657 "name": "BaseBdev1", 00:10:44.657 "uuid": "291a88e8-5ac7-54d0-84a9-d0fff46470d8", 00:10:44.657 "is_configured": true, 00:10:44.657 "data_offset": 2048, 00:10:44.657 "data_size": 63488 00:10:44.657 }, 00:10:44.657 { 00:10:44.657 "name": "BaseBdev2", 00:10:44.657 "uuid": "3d7a440f-f5c1-5ad3-a20f-aff0bba549d5", 00:10:44.657 "is_configured": true, 00:10:44.657 "data_offset": 2048, 00:10:44.657 "data_size": 63488 00:10:44.657 }, 00:10:44.657 { 00:10:44.657 "name": "BaseBdev3", 00:10:44.657 "uuid": "6d4e2735-d5b7-550d-87fb-fc86722c7ed2", 00:10:44.657 "is_configured": true, 00:10:44.657 "data_offset": 2048, 00:10:44.657 "data_size": 63488 00:10:44.657 }, 00:10:44.657 { 00:10:44.657 "name": "BaseBdev4", 00:10:44.657 "uuid": "6f38e074-7792-56e8-a368-50708c2fd044", 00:10:44.657 "is_configured": true, 00:10:44.657 "data_offset": 2048, 00:10:44.657 "data_size": 63488 00:10:44.657 } 00:10:44.657 ] 00:10:44.657 }' 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.657 09:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.225 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:45.225 09:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:45.225 [2024-10-15 09:10:03.035823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.163 09:10:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.163 09:10:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.163 "name": "raid_bdev1", 00:10:46.163 "uuid": "4e49f23b-9e26-418c-bbb4-62239f2ae30f", 00:10:46.163 "strip_size_kb": 64, 00:10:46.163 "state": "online", 00:10:46.163 "raid_level": "raid0", 00:10:46.163 "superblock": true, 00:10:46.163 "num_base_bdevs": 4, 00:10:46.163 "num_base_bdevs_discovered": 4, 00:10:46.163 "num_base_bdevs_operational": 4, 00:10:46.163 "base_bdevs_list": [ 00:10:46.163 { 00:10:46.163 "name": "BaseBdev1", 00:10:46.163 "uuid": "291a88e8-5ac7-54d0-84a9-d0fff46470d8", 00:10:46.163 "is_configured": true, 00:10:46.163 "data_offset": 2048, 00:10:46.163 "data_size": 63488 00:10:46.163 }, 00:10:46.163 { 00:10:46.163 "name": "BaseBdev2", 00:10:46.163 "uuid": "3d7a440f-f5c1-5ad3-a20f-aff0bba549d5", 00:10:46.163 "is_configured": true, 00:10:46.163 "data_offset": 2048, 00:10:46.163 "data_size": 63488 00:10:46.163 }, 00:10:46.163 { 00:10:46.163 "name": "BaseBdev3", 00:10:46.163 "uuid": "6d4e2735-d5b7-550d-87fb-fc86722c7ed2", 00:10:46.163 "is_configured": true, 00:10:46.163 "data_offset": 2048, 00:10:46.163 "data_size": 63488 00:10:46.163 }, 00:10:46.163 { 00:10:46.163 "name": "BaseBdev4", 00:10:46.163 "uuid": "6f38e074-7792-56e8-a368-50708c2fd044", 00:10:46.163 "is_configured": true, 00:10:46.163 "data_offset": 2048, 00:10:46.163 "data_size": 63488 00:10:46.163 } 00:10:46.163 ] 00:10:46.163 }' 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.163 09:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.731 [2024-10-15 09:10:04.425046] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.731 [2024-10-15 09:10:04.425198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.731 [2024-10-15 09:10:04.428235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.731 [2024-10-15 09:10:04.428353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.731 [2024-10-15 09:10:04.428424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.731 [2024-10-15 09:10:04.428482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:46.731 { 00:10:46.731 "results": [ 00:10:46.731 { 00:10:46.731 "job": "raid_bdev1", 00:10:46.731 "core_mask": "0x1", 00:10:46.731 "workload": "randrw", 00:10:46.731 "percentage": 50, 00:10:46.731 "status": "finished", 00:10:46.731 "queue_depth": 1, 00:10:46.731 "io_size": 131072, 00:10:46.731 "runtime": 1.389887, 00:10:46.731 "iops": 13900.410608920005, 00:10:46.731 "mibps": 1737.5513261150006, 00:10:46.731 "io_failed": 1, 00:10:46.731 "io_timeout": 0, 00:10:46.731 "avg_latency_us": 99.96917081646801, 00:10:46.731 "min_latency_us": 29.065502183406114, 00:10:46.731 "max_latency_us": 1645.5545851528384 00:10:46.731 } 00:10:46.731 ], 00:10:46.731 "core_count": 1 00:10:46.731 } 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71082 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71082 ']' 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71082 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71082 00:10:46.731 killing process with pid 71082 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71082' 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71082 00:10:46.731 [2024-10-15 09:10:04.476410] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.731 09:10:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71082 00:10:46.991 [2024-10-15 09:10:04.846800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9O14j7R5Fh 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:48.368 00:10:48.368 real 0m5.036s 00:10:48.368 user 0m5.888s 00:10:48.368 sys 0m0.644s 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:48.368 09:10:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.368 ************************************ 00:10:48.368 END TEST raid_read_error_test 00:10:48.368 ************************************ 00:10:48.627 09:10:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:48.627 09:10:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:48.627 09:10:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.627 09:10:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.627 ************************************ 00:10:48.627 START TEST raid_write_error_test 00:10:48.627 ************************************ 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L6X2ejsmy5 00:10:48.627 09:10:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71236 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71236 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71236 ']' 00:10:48.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.627 09:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.627 [2024-10-15 09:10:06.436109] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:10:48.627 [2024-10-15 09:10:06.436300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71236 ] 00:10:48.886 [2024-10-15 09:10:06.620979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.886 [2024-10-15 09:10:06.756469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.145 [2024-10-15 09:10:06.985959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.145 [2024-10-15 09:10:06.986050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.445 BaseBdev1_malloc 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:49.445 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 true 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 [2024-10-15 09:10:07.355424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:49.705 [2024-10-15 09:10:07.355579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.705 [2024-10-15 09:10:07.355616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:49.705 [2024-10-15 09:10:07.355647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.705 [2024-10-15 09:10:07.357950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.705 [2024-10-15 09:10:07.358032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:49.705 BaseBdev1 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 BaseBdev2_malloc 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:49.705 09:10:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 true 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 [2024-10-15 09:10:07.421622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:49.705 [2024-10-15 09:10:07.421796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.705 [2024-10-15 09:10:07.421834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:49.705 [2024-10-15 09:10:07.421866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.705 [2024-10-15 09:10:07.423962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.705 [2024-10-15 09:10:07.424050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:49.705 BaseBdev2 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:49.705 BaseBdev3_malloc 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 true 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 [2024-10-15 09:10:07.502058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:49.705 [2024-10-15 09:10:07.502146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.705 [2024-10-15 09:10:07.502170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:49.705 [2024-10-15 09:10:07.502180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.705 [2024-10-15 09:10:07.504473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.705 [2024-10-15 09:10:07.504608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:49.705 BaseBdev3 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 BaseBdev4_malloc 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 true 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 [2024-10-15 09:10:07.568590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:49.705 [2024-10-15 09:10:07.568698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.705 [2024-10-15 09:10:07.568725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:49.705 [2024-10-15 09:10:07.568737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.705 [2024-10-15 09:10:07.570976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.705 [2024-10-15 09:10:07.571027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:49.705 BaseBdev4 
00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 [2024-10-15 09:10:07.580660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.705 [2024-10-15 09:10:07.582567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.705 [2024-10-15 09:10:07.582660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.705 [2024-10-15 09:10:07.582742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.705 [2024-10-15 09:10:07.582973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:49.705 [2024-10-15 09:10:07.583000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:49.705 [2024-10-15 09:10:07.583283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:49.705 [2024-10-15 09:10:07.583449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:49.705 [2024-10-15 09:10:07.583459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:49.705 [2024-10-15 09:10:07.583654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.705 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.706 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.706 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.965 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.965 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.965 "name": "raid_bdev1", 00:10:49.965 "uuid": "56c78f3b-accb-4856-a284-e0ff90623504", 00:10:49.965 "strip_size_kb": 64, 00:10:49.965 "state": "online", 00:10:49.965 "raid_level": "raid0", 00:10:49.965 "superblock": true, 00:10:49.965 "num_base_bdevs": 4, 00:10:49.965 "num_base_bdevs_discovered": 4, 00:10:49.965 
"num_base_bdevs_operational": 4, 00:10:49.965 "base_bdevs_list": [ 00:10:49.965 { 00:10:49.965 "name": "BaseBdev1", 00:10:49.965 "uuid": "ff9dbdff-648a-5b04-aba5-a1534f9f1910", 00:10:49.965 "is_configured": true, 00:10:49.965 "data_offset": 2048, 00:10:49.965 "data_size": 63488 00:10:49.965 }, 00:10:49.965 { 00:10:49.965 "name": "BaseBdev2", 00:10:49.965 "uuid": "43d61d99-900a-508f-b2bc-d3b28cb648c1", 00:10:49.965 "is_configured": true, 00:10:49.965 "data_offset": 2048, 00:10:49.965 "data_size": 63488 00:10:49.965 }, 00:10:49.965 { 00:10:49.965 "name": "BaseBdev3", 00:10:49.965 "uuid": "f0f1df9b-149e-51d5-9922-b836a43bbda7", 00:10:49.965 "is_configured": true, 00:10:49.965 "data_offset": 2048, 00:10:49.965 "data_size": 63488 00:10:49.965 }, 00:10:49.965 { 00:10:49.965 "name": "BaseBdev4", 00:10:49.965 "uuid": "42b02856-a75e-556f-b177-241af21b224c", 00:10:49.965 "is_configured": true, 00:10:49.965 "data_offset": 2048, 00:10:49.965 "data_size": 63488 00:10:49.965 } 00:10:49.965 ] 00:10:49.965 }' 00:10:49.965 09:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.965 09:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.225 09:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:50.225 09:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:50.484 [2024-10-15 09:10:08.168960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.419 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.419 "name": "raid_bdev1", 00:10:51.419 "uuid": "56c78f3b-accb-4856-a284-e0ff90623504", 00:10:51.419 "strip_size_kb": 64, 00:10:51.419 "state": "online", 00:10:51.419 "raid_level": "raid0", 00:10:51.419 "superblock": true, 00:10:51.419 "num_base_bdevs": 4, 00:10:51.419 "num_base_bdevs_discovered": 4, 00:10:51.419 "num_base_bdevs_operational": 4, 00:10:51.419 "base_bdevs_list": [ 00:10:51.419 { 00:10:51.419 "name": "BaseBdev1", 00:10:51.419 "uuid": "ff9dbdff-648a-5b04-aba5-a1534f9f1910", 00:10:51.419 "is_configured": true, 00:10:51.419 "data_offset": 2048, 00:10:51.419 "data_size": 63488 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "name": "BaseBdev2", 00:10:51.419 "uuid": "43d61d99-900a-508f-b2bc-d3b28cb648c1", 00:10:51.419 "is_configured": true, 00:10:51.419 "data_offset": 2048, 00:10:51.419 "data_size": 63488 00:10:51.419 }, 00:10:51.419 { 00:10:51.419 "name": "BaseBdev3", 00:10:51.419 "uuid": "f0f1df9b-149e-51d5-9922-b836a43bbda7", 00:10:51.419 "is_configured": true, 00:10:51.419 "data_offset": 2048, 00:10:51.419 "data_size": 63488 00:10:51.419 }, 00:10:51.419 { 00:10:51.420 "name": "BaseBdev4", 00:10:51.420 "uuid": "42b02856-a75e-556f-b177-241af21b224c", 00:10:51.420 "is_configured": true, 00:10:51.420 "data_offset": 2048, 00:10:51.420 "data_size": 63488 00:10:51.420 } 00:10:51.420 ] 00:10:51.420 }' 00:10:51.420 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.420 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:51.984 [2024-10-15 09:10:09.613905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.984 [2024-10-15 09:10:09.613960] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.984 [2024-10-15 09:10:09.616670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.984 [2024-10-15 09:10:09.616758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.984 [2024-10-15 09:10:09.616826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.984 [2024-10-15 09:10:09.616840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:51.984 { 00:10:51.984 "results": [ 00:10:51.984 { 00:10:51.984 "job": "raid_bdev1", 00:10:51.984 "core_mask": "0x1", 00:10:51.984 "workload": "randrw", 00:10:51.984 "percentage": 50, 00:10:51.984 "status": "finished", 00:10:51.984 "queue_depth": 1, 00:10:51.984 "io_size": 131072, 00:10:51.984 "runtime": 1.445915, 00:10:51.984 "iops": 14320.343865303286, 00:10:51.984 "mibps": 1790.0429831629108, 00:10:51.984 "io_failed": 1, 00:10:51.984 "io_timeout": 0, 00:10:51.984 "avg_latency_us": 97.22226709403375, 00:10:51.984 "min_latency_us": 25.9353711790393, 00:10:51.984 "max_latency_us": 1645.5545851528384 00:10:51.984 } 00:10:51.984 ], 00:10:51.984 "core_count": 1 00:10:51.984 } 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71236 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71236 ']' 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71236 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71236 00:10:51.984 killing process with pid 71236 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71236' 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71236 00:10:51.984 09:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71236 00:10:51.984 [2024-10-15 09:10:09.653826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.241 [2024-10-15 09:10:10.023728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L6X2ejsmy5 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:53.611 ************************************ 00:10:53.611 END TEST raid_write_error_test 00:10:53.611 ************************************ 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.69 != \0\.\0\0 ]] 00:10:53.611 00:10:53.611 real 0m5.076s 00:10:53.611 user 0m6.024s 00:10:53.611 sys 0m0.626s 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.611 09:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.611 09:10:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:53.611 09:10:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:53.611 09:10:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:53.611 09:10:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.611 09:10:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.611 ************************************ 00:10:53.611 START TEST raid_state_function_test 00:10:53.611 ************************************ 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:53.611 Process raid pid: 71380 00:10:53.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71380 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71380' 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71380 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71380 ']' 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.611 09:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:53.868 [2024-10-15 09:10:11.541122] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:10:53.868 [2024-10-15 09:10:11.541263] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.868 [2024-10-15 09:10:11.696040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.125 [2024-10-15 09:10:11.831667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.382 [2024-10-15 09:10:12.064155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.382 [2024-10-15 09:10:12.064312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.640 [2024-10-15 09:10:12.484867] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:54.640 [2024-10-15 09:10:12.485040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:54.640 [2024-10-15 09:10:12.485076] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.640 [2024-10-15 09:10:12.485105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.640 [2024-10-15 09:10:12.485126] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:54.640 [2024-10-15 09:10:12.485150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.640 [2024-10-15 09:10:12.485170] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.640 [2024-10-15 09:10:12.485211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.640 "name": "Existed_Raid", 00:10:54.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.640 "strip_size_kb": 64, 00:10:54.640 "state": "configuring", 00:10:54.640 "raid_level": "concat", 00:10:54.640 "superblock": false, 00:10:54.640 "num_base_bdevs": 4, 00:10:54.640 "num_base_bdevs_discovered": 0, 00:10:54.640 "num_base_bdevs_operational": 4, 00:10:54.640 "base_bdevs_list": [ 00:10:54.640 { 00:10:54.640 "name": "BaseBdev1", 00:10:54.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.640 "is_configured": false, 00:10:54.640 "data_offset": 0, 00:10:54.640 "data_size": 0 00:10:54.640 }, 00:10:54.640 { 00:10:54.640 "name": "BaseBdev2", 00:10:54.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.640 "is_configured": false, 00:10:54.640 "data_offset": 0, 00:10:54.640 "data_size": 0 00:10:54.640 }, 00:10:54.640 { 00:10:54.640 "name": "BaseBdev3", 00:10:54.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.640 "is_configured": false, 00:10:54.640 "data_offset": 0, 00:10:54.640 "data_size": 0 00:10:54.640 }, 00:10:54.640 { 00:10:54.640 "name": "BaseBdev4", 00:10:54.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.640 "is_configured": false, 00:10:54.640 "data_offset": 0, 00:10:54.640 "data_size": 0 00:10:54.640 } 00:10:54.640 ] 00:10:54.640 }' 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.640 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.205 [2024-10-15 09:10:12.916052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.205 [2024-10-15 09:10:12.916119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.205 [2024-10-15 09:10:12.924031] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.205 [2024-10-15 09:10:12.924089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.205 [2024-10-15 09:10:12.924099] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.205 [2024-10-15 09:10:12.924111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.205 [2024-10-15 09:10:12.924119] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.205 [2024-10-15 09:10:12.924129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.205 [2024-10-15 09:10:12.924137] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.205 [2024-10-15 09:10:12.924147] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.205 [2024-10-15 09:10:12.974662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.205 BaseBdev1 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.205 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.206 [ 00:10:55.206 { 00:10:55.206 "name": "BaseBdev1", 00:10:55.206 "aliases": [ 00:10:55.206 "94f6493e-a219-48ca-a4be-cfa17fe577cb" 00:10:55.206 ], 00:10:55.206 "product_name": "Malloc disk", 00:10:55.206 "block_size": 512, 00:10:55.206 "num_blocks": 65536, 00:10:55.206 "uuid": "94f6493e-a219-48ca-a4be-cfa17fe577cb", 00:10:55.206 "assigned_rate_limits": { 00:10:55.206 "rw_ios_per_sec": 0, 00:10:55.206 "rw_mbytes_per_sec": 0, 00:10:55.206 "r_mbytes_per_sec": 0, 00:10:55.206 "w_mbytes_per_sec": 0 00:10:55.206 }, 00:10:55.206 "claimed": true, 00:10:55.206 "claim_type": "exclusive_write", 00:10:55.206 "zoned": false, 00:10:55.206 "supported_io_types": { 00:10:55.206 "read": true, 00:10:55.206 "write": true, 00:10:55.206 "unmap": true, 00:10:55.206 "flush": true, 00:10:55.206 "reset": true, 00:10:55.206 "nvme_admin": false, 00:10:55.206 "nvme_io": false, 00:10:55.206 "nvme_io_md": false, 00:10:55.206 "write_zeroes": true, 00:10:55.206 "zcopy": true, 00:10:55.206 "get_zone_info": false, 00:10:55.206 "zone_management": false, 00:10:55.206 "zone_append": false, 00:10:55.206 "compare": false, 00:10:55.206 "compare_and_write": false, 00:10:55.206 "abort": true, 00:10:55.206 "seek_hole": false, 00:10:55.206 "seek_data": false, 00:10:55.206 "copy": true, 00:10:55.206 "nvme_iov_md": false 00:10:55.206 }, 00:10:55.206 "memory_domains": [ 00:10:55.206 { 00:10:55.206 "dma_device_id": "system", 00:10:55.206 "dma_device_type": 1 00:10:55.206 }, 00:10:55.206 { 00:10:55.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.206 "dma_device_type": 2 00:10:55.206 } 00:10:55.206 ], 00:10:55.206 "driver_specific": {} 00:10:55.206 } 00:10:55.206 ] 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.206 09:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.206 "name": "Existed_Raid", 
00:10:55.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.206 "strip_size_kb": 64, 00:10:55.206 "state": "configuring", 00:10:55.206 "raid_level": "concat", 00:10:55.206 "superblock": false, 00:10:55.206 "num_base_bdevs": 4, 00:10:55.206 "num_base_bdevs_discovered": 1, 00:10:55.206 "num_base_bdevs_operational": 4, 00:10:55.206 "base_bdevs_list": [ 00:10:55.206 { 00:10:55.206 "name": "BaseBdev1", 00:10:55.206 "uuid": "94f6493e-a219-48ca-a4be-cfa17fe577cb", 00:10:55.206 "is_configured": true, 00:10:55.206 "data_offset": 0, 00:10:55.206 "data_size": 65536 00:10:55.206 }, 00:10:55.206 { 00:10:55.206 "name": "BaseBdev2", 00:10:55.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.206 "is_configured": false, 00:10:55.206 "data_offset": 0, 00:10:55.206 "data_size": 0 00:10:55.206 }, 00:10:55.206 { 00:10:55.206 "name": "BaseBdev3", 00:10:55.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.206 "is_configured": false, 00:10:55.206 "data_offset": 0, 00:10:55.206 "data_size": 0 00:10:55.206 }, 00:10:55.206 { 00:10:55.206 "name": "BaseBdev4", 00:10:55.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.206 "is_configured": false, 00:10:55.206 "data_offset": 0, 00:10:55.206 "data_size": 0 00:10:55.206 } 00:10:55.206 ] 00:10:55.206 }' 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.206 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.771 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.771 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.771 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.771 [2024-10-15 09:10:13.421996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.771 [2024-10-15 09:10:13.422189] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:55.771 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.771 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.771 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.771 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.771 [2024-10-15 09:10:13.430022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.771 [2024-10-15 09:10:13.432158] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.771 [2024-10-15 09:10:13.432251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.772 [2024-10-15 09:10:13.432284] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.772 [2024-10-15 09:10:13.432312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.772 [2024-10-15 09:10:13.432333] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.772 [2024-10-15 09:10:13.432357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.772 "name": "Existed_Raid", 00:10:55.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.772 "strip_size_kb": 64, 00:10:55.772 "state": "configuring", 00:10:55.772 "raid_level": "concat", 00:10:55.772 "superblock": false, 00:10:55.772 "num_base_bdevs": 4, 00:10:55.772 
"num_base_bdevs_discovered": 1, 00:10:55.772 "num_base_bdevs_operational": 4, 00:10:55.772 "base_bdevs_list": [ 00:10:55.772 { 00:10:55.772 "name": "BaseBdev1", 00:10:55.772 "uuid": "94f6493e-a219-48ca-a4be-cfa17fe577cb", 00:10:55.772 "is_configured": true, 00:10:55.772 "data_offset": 0, 00:10:55.772 "data_size": 65536 00:10:55.772 }, 00:10:55.772 { 00:10:55.772 "name": "BaseBdev2", 00:10:55.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.772 "is_configured": false, 00:10:55.772 "data_offset": 0, 00:10:55.772 "data_size": 0 00:10:55.772 }, 00:10:55.772 { 00:10:55.772 "name": "BaseBdev3", 00:10:55.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.772 "is_configured": false, 00:10:55.772 "data_offset": 0, 00:10:55.772 "data_size": 0 00:10:55.772 }, 00:10:55.772 { 00:10:55.772 "name": "BaseBdev4", 00:10:55.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.772 "is_configured": false, 00:10:55.772 "data_offset": 0, 00:10:55.772 "data_size": 0 00:10:55.772 } 00:10:55.772 ] 00:10:55.772 }' 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.772 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.031 [2024-10-15 09:10:13.838006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.031 BaseBdev2 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:56.031 09:10:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.031 [ 00:10:56.031 { 00:10:56.031 "name": "BaseBdev2", 00:10:56.031 "aliases": [ 00:10:56.031 "54a2f85f-c5a0-43f9-a306-915f785d9af4" 00:10:56.031 ], 00:10:56.031 "product_name": "Malloc disk", 00:10:56.031 "block_size": 512, 00:10:56.031 "num_blocks": 65536, 00:10:56.031 "uuid": "54a2f85f-c5a0-43f9-a306-915f785d9af4", 00:10:56.031 "assigned_rate_limits": { 00:10:56.031 "rw_ios_per_sec": 0, 00:10:56.031 "rw_mbytes_per_sec": 0, 00:10:56.031 "r_mbytes_per_sec": 0, 00:10:56.031 "w_mbytes_per_sec": 0 00:10:56.031 }, 00:10:56.031 "claimed": true, 00:10:56.031 "claim_type": "exclusive_write", 00:10:56.031 "zoned": false, 00:10:56.031 "supported_io_types": { 
00:10:56.031 "read": true, 00:10:56.031 "write": true, 00:10:56.031 "unmap": true, 00:10:56.031 "flush": true, 00:10:56.031 "reset": true, 00:10:56.031 "nvme_admin": false, 00:10:56.031 "nvme_io": false, 00:10:56.031 "nvme_io_md": false, 00:10:56.031 "write_zeroes": true, 00:10:56.031 "zcopy": true, 00:10:56.031 "get_zone_info": false, 00:10:56.031 "zone_management": false, 00:10:56.031 "zone_append": false, 00:10:56.031 "compare": false, 00:10:56.031 "compare_and_write": false, 00:10:56.031 "abort": true, 00:10:56.031 "seek_hole": false, 00:10:56.031 "seek_data": false, 00:10:56.031 "copy": true, 00:10:56.031 "nvme_iov_md": false 00:10:56.031 }, 00:10:56.031 "memory_domains": [ 00:10:56.031 { 00:10:56.031 "dma_device_id": "system", 00:10:56.031 "dma_device_type": 1 00:10:56.031 }, 00:10:56.031 { 00:10:56.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.031 "dma_device_type": 2 00:10:56.031 } 00:10:56.031 ], 00:10:56.031 "driver_specific": {} 00:10:56.031 } 00:10:56.031 ] 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.031 "name": "Existed_Raid", 00:10:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.031 "strip_size_kb": 64, 00:10:56.031 "state": "configuring", 00:10:56.031 "raid_level": "concat", 00:10:56.031 "superblock": false, 00:10:56.031 "num_base_bdevs": 4, 00:10:56.031 "num_base_bdevs_discovered": 2, 00:10:56.031 "num_base_bdevs_operational": 4, 00:10:56.031 "base_bdevs_list": [ 00:10:56.031 { 00:10:56.031 "name": "BaseBdev1", 00:10:56.031 "uuid": "94f6493e-a219-48ca-a4be-cfa17fe577cb", 00:10:56.031 "is_configured": true, 00:10:56.031 "data_offset": 0, 00:10:56.031 "data_size": 65536 00:10:56.031 }, 00:10:56.031 { 00:10:56.031 "name": "BaseBdev2", 00:10:56.031 "uuid": "54a2f85f-c5a0-43f9-a306-915f785d9af4", 00:10:56.031 
"is_configured": true, 00:10:56.031 "data_offset": 0, 00:10:56.031 "data_size": 65536 00:10:56.031 }, 00:10:56.031 { 00:10:56.031 "name": "BaseBdev3", 00:10:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.031 "is_configured": false, 00:10:56.031 "data_offset": 0, 00:10:56.031 "data_size": 0 00:10:56.031 }, 00:10:56.031 { 00:10:56.031 "name": "BaseBdev4", 00:10:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.031 "is_configured": false, 00:10:56.031 "data_offset": 0, 00:10:56.031 "data_size": 0 00:10:56.031 } 00:10:56.031 ] 00:10:56.031 }' 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.031 09:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.598 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:56.598 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.598 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.599 [2024-10-15 09:10:14.345871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.599 BaseBdev3 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.599 [ 00:10:56.599 { 00:10:56.599 "name": "BaseBdev3", 00:10:56.599 "aliases": [ 00:10:56.599 "60a7da31-1cc1-4656-bfdf-ef15159473d9" 00:10:56.599 ], 00:10:56.599 "product_name": "Malloc disk", 00:10:56.599 "block_size": 512, 00:10:56.599 "num_blocks": 65536, 00:10:56.599 "uuid": "60a7da31-1cc1-4656-bfdf-ef15159473d9", 00:10:56.599 "assigned_rate_limits": { 00:10:56.599 "rw_ios_per_sec": 0, 00:10:56.599 "rw_mbytes_per_sec": 0, 00:10:56.599 "r_mbytes_per_sec": 0, 00:10:56.599 "w_mbytes_per_sec": 0 00:10:56.599 }, 00:10:56.599 "claimed": true, 00:10:56.599 "claim_type": "exclusive_write", 00:10:56.599 "zoned": false, 00:10:56.599 "supported_io_types": { 00:10:56.599 "read": true, 00:10:56.599 "write": true, 00:10:56.599 "unmap": true, 00:10:56.599 "flush": true, 00:10:56.599 "reset": true, 00:10:56.599 "nvme_admin": false, 00:10:56.599 "nvme_io": false, 00:10:56.599 "nvme_io_md": false, 00:10:56.599 "write_zeroes": true, 00:10:56.599 "zcopy": true, 00:10:56.599 "get_zone_info": false, 00:10:56.599 "zone_management": false, 00:10:56.599 "zone_append": false, 00:10:56.599 "compare": false, 00:10:56.599 "compare_and_write": false, 
00:10:56.599 "abort": true, 00:10:56.599 "seek_hole": false, 00:10:56.599 "seek_data": false, 00:10:56.599 "copy": true, 00:10:56.599 "nvme_iov_md": false 00:10:56.599 }, 00:10:56.599 "memory_domains": [ 00:10:56.599 { 00:10:56.599 "dma_device_id": "system", 00:10:56.599 "dma_device_type": 1 00:10:56.599 }, 00:10:56.599 { 00:10:56.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.599 "dma_device_type": 2 00:10:56.599 } 00:10:56.599 ], 00:10:56.599 "driver_specific": {} 00:10:56.599 } 00:10:56.599 ] 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.599 "name": "Existed_Raid", 00:10:56.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.599 "strip_size_kb": 64, 00:10:56.599 "state": "configuring", 00:10:56.599 "raid_level": "concat", 00:10:56.599 "superblock": false, 00:10:56.599 "num_base_bdevs": 4, 00:10:56.599 "num_base_bdevs_discovered": 3, 00:10:56.599 "num_base_bdevs_operational": 4, 00:10:56.599 "base_bdevs_list": [ 00:10:56.599 { 00:10:56.599 "name": "BaseBdev1", 00:10:56.599 "uuid": "94f6493e-a219-48ca-a4be-cfa17fe577cb", 00:10:56.599 "is_configured": true, 00:10:56.599 "data_offset": 0, 00:10:56.599 "data_size": 65536 00:10:56.599 }, 00:10:56.599 { 00:10:56.599 "name": "BaseBdev2", 00:10:56.599 "uuid": "54a2f85f-c5a0-43f9-a306-915f785d9af4", 00:10:56.599 "is_configured": true, 00:10:56.599 "data_offset": 0, 00:10:56.599 "data_size": 65536 00:10:56.599 }, 00:10:56.599 { 00:10:56.599 "name": "BaseBdev3", 00:10:56.599 "uuid": "60a7da31-1cc1-4656-bfdf-ef15159473d9", 00:10:56.599 "is_configured": true, 00:10:56.599 "data_offset": 0, 00:10:56.599 "data_size": 65536 00:10:56.599 }, 00:10:56.599 { 00:10:56.599 "name": "BaseBdev4", 00:10:56.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.599 "is_configured": false, 
00:10:56.599 "data_offset": 0, 00:10:56.599 "data_size": 0 00:10:56.599 } 00:10:56.599 ] 00:10:56.599 }' 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.599 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.236 [2024-10-15 09:10:14.857920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.236 [2024-10-15 09:10:14.857984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:57.236 [2024-10-15 09:10:14.857994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:57.236 [2024-10-15 09:10:14.858298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.236 [2024-10-15 09:10:14.858490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:57.236 [2024-10-15 09:10:14.858506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:57.236 [2024-10-15 09:10:14.858829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.236 BaseBdev4 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.236 [ 00:10:57.236 { 00:10:57.236 "name": "BaseBdev4", 00:10:57.236 "aliases": [ 00:10:57.236 "1a617e10-f845-4327-9950-6748b4555514" 00:10:57.236 ], 00:10:57.236 "product_name": "Malloc disk", 00:10:57.236 "block_size": 512, 00:10:57.236 "num_blocks": 65536, 00:10:57.236 "uuid": "1a617e10-f845-4327-9950-6748b4555514", 00:10:57.236 "assigned_rate_limits": { 00:10:57.236 "rw_ios_per_sec": 0, 00:10:57.236 "rw_mbytes_per_sec": 0, 00:10:57.236 "r_mbytes_per_sec": 0, 00:10:57.236 "w_mbytes_per_sec": 0 00:10:57.236 }, 00:10:57.236 "claimed": true, 00:10:57.236 "claim_type": "exclusive_write", 00:10:57.236 "zoned": false, 00:10:57.236 "supported_io_types": { 00:10:57.236 "read": true, 00:10:57.236 "write": true, 00:10:57.236 "unmap": true, 00:10:57.236 "flush": true, 00:10:57.236 "reset": true, 00:10:57.236 
"nvme_admin": false, 00:10:57.236 "nvme_io": false, 00:10:57.236 "nvme_io_md": false, 00:10:57.236 "write_zeroes": true, 00:10:57.236 "zcopy": true, 00:10:57.236 "get_zone_info": false, 00:10:57.236 "zone_management": false, 00:10:57.236 "zone_append": false, 00:10:57.236 "compare": false, 00:10:57.236 "compare_and_write": false, 00:10:57.236 "abort": true, 00:10:57.236 "seek_hole": false, 00:10:57.236 "seek_data": false, 00:10:57.236 "copy": true, 00:10:57.236 "nvme_iov_md": false 00:10:57.236 }, 00:10:57.236 "memory_domains": [ 00:10:57.236 { 00:10:57.236 "dma_device_id": "system", 00:10:57.236 "dma_device_type": 1 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.236 "dma_device_type": 2 00:10:57.236 } 00:10:57.236 ], 00:10:57.236 "driver_specific": {} 00:10:57.236 } 00:10:57.236 ] 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.236 
09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.236 "name": "Existed_Raid", 00:10:57.236 "uuid": "969265b8-6748-455f-b482-2d86af18773a", 00:10:57.236 "strip_size_kb": 64, 00:10:57.236 "state": "online", 00:10:57.236 "raid_level": "concat", 00:10:57.236 "superblock": false, 00:10:57.236 "num_base_bdevs": 4, 00:10:57.236 "num_base_bdevs_discovered": 4, 00:10:57.236 "num_base_bdevs_operational": 4, 00:10:57.236 "base_bdevs_list": [ 00:10:57.236 { 00:10:57.236 "name": "BaseBdev1", 00:10:57.236 "uuid": "94f6493e-a219-48ca-a4be-cfa17fe577cb", 00:10:57.236 "is_configured": true, 00:10:57.236 "data_offset": 0, 00:10:57.236 "data_size": 65536 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "name": "BaseBdev2", 00:10:57.236 "uuid": "54a2f85f-c5a0-43f9-a306-915f785d9af4", 00:10:57.236 "is_configured": true, 00:10:57.236 "data_offset": 0, 00:10:57.236 "data_size": 65536 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "name": "BaseBdev3", 
00:10:57.236 "uuid": "60a7da31-1cc1-4656-bfdf-ef15159473d9", 00:10:57.236 "is_configured": true, 00:10:57.236 "data_offset": 0, 00:10:57.236 "data_size": 65536 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "name": "BaseBdev4", 00:10:57.236 "uuid": "1a617e10-f845-4327-9950-6748b4555514", 00:10:57.236 "is_configured": true, 00:10:57.236 "data_offset": 0, 00:10:57.236 "data_size": 65536 00:10:57.236 } 00:10:57.236 ] 00:10:57.236 }' 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.236 09:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.496 [2024-10-15 09:10:15.317846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.496 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.496 
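The verify_raid_bdev_properties pass below builds base_bdev_names by keeping only configured base bdevs (the `select(.is_configured == true).name` filter at bdev_raid.sh@188). A sketch with a hypothetical, trimmed-down base_bdevs_list:

```shell
# Hypothetical fragment of a bdev_get_bdevs entry for a raid volume;
# only the fields the filter touches are included.
list='{"driver_specific": {"raid": {"base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev4", "is_configured": false}]}}}'

# Same filter as bdev_raid.sh@188: drop unconfigured base bdevs,
# keep just the names.
names=$(printf '%s' "$list" \
  | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"   # BaseBdev1
```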
09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.496 "name": "Existed_Raid", 00:10:57.496 "aliases": [ 00:10:57.496 "969265b8-6748-455f-b482-2d86af18773a" 00:10:57.496 ], 00:10:57.496 "product_name": "Raid Volume", 00:10:57.496 "block_size": 512, 00:10:57.496 "num_blocks": 262144, 00:10:57.496 "uuid": "969265b8-6748-455f-b482-2d86af18773a", 00:10:57.496 "assigned_rate_limits": { 00:10:57.496 "rw_ios_per_sec": 0, 00:10:57.496 "rw_mbytes_per_sec": 0, 00:10:57.496 "r_mbytes_per_sec": 0, 00:10:57.496 "w_mbytes_per_sec": 0 00:10:57.496 }, 00:10:57.496 "claimed": false, 00:10:57.496 "zoned": false, 00:10:57.496 "supported_io_types": { 00:10:57.496 "read": true, 00:10:57.496 "write": true, 00:10:57.496 "unmap": true, 00:10:57.496 "flush": true, 00:10:57.496 "reset": true, 00:10:57.496 "nvme_admin": false, 00:10:57.496 "nvme_io": false, 00:10:57.496 "nvme_io_md": false, 00:10:57.496 "write_zeroes": true, 00:10:57.496 "zcopy": false, 00:10:57.496 "get_zone_info": false, 00:10:57.496 "zone_management": false, 00:10:57.496 "zone_append": false, 00:10:57.496 "compare": false, 00:10:57.496 "compare_and_write": false, 00:10:57.496 "abort": false, 00:10:57.497 "seek_hole": false, 00:10:57.497 "seek_data": false, 00:10:57.497 "copy": false, 00:10:57.497 "nvme_iov_md": false 00:10:57.497 }, 00:10:57.497 "memory_domains": [ 00:10:57.497 { 00:10:57.497 "dma_device_id": "system", 00:10:57.497 "dma_device_type": 1 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.497 "dma_device_type": 2 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "dma_device_id": "system", 00:10:57.497 "dma_device_type": 1 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.497 "dma_device_type": 2 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "dma_device_id": "system", 00:10:57.497 "dma_device_type": 1 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:57.497 "dma_device_type": 2 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "dma_device_id": "system", 00:10:57.497 "dma_device_type": 1 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.497 "dma_device_type": 2 00:10:57.497 } 00:10:57.497 ], 00:10:57.497 "driver_specific": { 00:10:57.497 "raid": { 00:10:57.497 "uuid": "969265b8-6748-455f-b482-2d86af18773a", 00:10:57.497 "strip_size_kb": 64, 00:10:57.497 "state": "online", 00:10:57.497 "raid_level": "concat", 00:10:57.497 "superblock": false, 00:10:57.497 "num_base_bdevs": 4, 00:10:57.497 "num_base_bdevs_discovered": 4, 00:10:57.497 "num_base_bdevs_operational": 4, 00:10:57.497 "base_bdevs_list": [ 00:10:57.497 { 00:10:57.497 "name": "BaseBdev1", 00:10:57.497 "uuid": "94f6493e-a219-48ca-a4be-cfa17fe577cb", 00:10:57.497 "is_configured": true, 00:10:57.497 "data_offset": 0, 00:10:57.497 "data_size": 65536 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "name": "BaseBdev2", 00:10:57.497 "uuid": "54a2f85f-c5a0-43f9-a306-915f785d9af4", 00:10:57.497 "is_configured": true, 00:10:57.497 "data_offset": 0, 00:10:57.497 "data_size": 65536 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "name": "BaseBdev3", 00:10:57.497 "uuid": "60a7da31-1cc1-4656-bfdf-ef15159473d9", 00:10:57.497 "is_configured": true, 00:10:57.497 "data_offset": 0, 00:10:57.497 "data_size": 65536 00:10:57.497 }, 00:10:57.497 { 00:10:57.497 "name": "BaseBdev4", 00:10:57.497 "uuid": "1a617e10-f845-4327-9950-6748b4555514", 00:10:57.497 "is_configured": true, 00:10:57.497 "data_offset": 0, 00:10:57.497 "data_size": 65536 00:10:57.497 } 00:10:57.497 ] 00:10:57.497 } 00:10:57.497 } 00:10:57.497 }' 00:10:57.497 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:57.755 BaseBdev2 
00:10:57.755 BaseBdev3 00:10:57.755 BaseBdev4' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.755 09:10:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.755 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.756 09:10:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.756 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.756 [2024-10-15 09:10:15.637027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.756 [2024-10-15 09:10:15.637080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.756 [2024-10-15 09:10:15.637140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.028 "name": "Existed_Raid", 00:10:58.028 "uuid": "969265b8-6748-455f-b482-2d86af18773a", 00:10:58.028 "strip_size_kb": 64, 00:10:58.028 "state": "offline", 00:10:58.028 "raid_level": "concat", 00:10:58.028 "superblock": false, 00:10:58.028 "num_base_bdevs": 4, 00:10:58.028 "num_base_bdevs_discovered": 3, 00:10:58.028 "num_base_bdevs_operational": 3, 00:10:58.028 "base_bdevs_list": [ 00:10:58.028 { 00:10:58.028 "name": null, 00:10:58.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.028 "is_configured": false, 00:10:58.028 "data_offset": 0, 00:10:58.028 "data_size": 65536 00:10:58.028 }, 00:10:58.028 { 00:10:58.028 "name": "BaseBdev2", 00:10:58.028 "uuid": "54a2f85f-c5a0-43f9-a306-915f785d9af4", 00:10:58.028 "is_configured": 
true, 00:10:58.028 "data_offset": 0, 00:10:58.028 "data_size": 65536 00:10:58.028 }, 00:10:58.028 { 00:10:58.028 "name": "BaseBdev3", 00:10:58.028 "uuid": "60a7da31-1cc1-4656-bfdf-ef15159473d9", 00:10:58.028 "is_configured": true, 00:10:58.028 "data_offset": 0, 00:10:58.028 "data_size": 65536 00:10:58.028 }, 00:10:58.028 { 00:10:58.028 "name": "BaseBdev4", 00:10:58.028 "uuid": "1a617e10-f845-4327-9950-6748b4555514", 00:10:58.028 "is_configured": true, 00:10:58.028 "data_offset": 0, 00:10:58.028 "data_size": 65536 00:10:58.028 } 00:10:58.028 ] 00:10:58.028 }' 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.028 09:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
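The expected_state above flips to offline because `has_redundancy concat` returns 1 at bdev_raid.sh@200. A sketch of that helper as its trace suggests; which raid levels land in the redundant arm (raid1, raid5f here) is an assumption, but concat taking the default `return 1` branch is exactly what the trace shows:

```shell
# Sketch of has_redundancy (bdev_raid.sh@198 in the trace): concat falls
# through to the default branch and returns 1. The raid1/raid5f arm is an
# assumed list of redundant levels.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

# Without redundancy, deleting a base bdev must take the array offline,
# so the test expects "offline" rather than "online".
if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"   # offline
```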
00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.596 [2024-10-15 09:10:16.251288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.596 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.596 [2024-10-15 09:10:16.417035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.854 09:10:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.854 [2024-10-15 09:10:16.590587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:58.854 [2024-10-15 09:10:16.590771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.854 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.112 BaseBdev2 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.112 [ 00:10:59.112 { 00:10:59.112 "name": "BaseBdev2", 00:10:59.112 "aliases": [ 00:10:59.112 "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b" 00:10:59.112 ], 00:10:59.112 "product_name": "Malloc disk", 00:10:59.112 "block_size": 512, 00:10:59.112 "num_blocks": 65536, 00:10:59.112 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:10:59.112 "assigned_rate_limits": { 00:10:59.112 "rw_ios_per_sec": 0, 00:10:59.112 "rw_mbytes_per_sec": 0, 00:10:59.112 "r_mbytes_per_sec": 0, 00:10:59.112 "w_mbytes_per_sec": 0 00:10:59.112 }, 00:10:59.112 "claimed": false, 00:10:59.112 "zoned": false, 00:10:59.112 "supported_io_types": { 00:10:59.112 "read": true, 00:10:59.112 "write": true, 00:10:59.112 "unmap": true, 00:10:59.112 "flush": true, 00:10:59.112 "reset": true, 00:10:59.112 "nvme_admin": false, 00:10:59.112 "nvme_io": false, 00:10:59.112 "nvme_io_md": false, 00:10:59.112 "write_zeroes": true, 00:10:59.112 "zcopy": true, 00:10:59.112 "get_zone_info": false, 00:10:59.112 "zone_management": false, 00:10:59.112 "zone_append": false, 00:10:59.112 "compare": false, 00:10:59.112 "compare_and_write": false, 00:10:59.112 "abort": true, 00:10:59.112 "seek_hole": false, 00:10:59.112 
"seek_data": false, 00:10:59.112 "copy": true, 00:10:59.112 "nvme_iov_md": false 00:10:59.112 }, 00:10:59.112 "memory_domains": [ 00:10:59.112 { 00:10:59.112 "dma_device_id": "system", 00:10:59.112 "dma_device_type": 1 00:10:59.112 }, 00:10:59.112 { 00:10:59.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.112 "dma_device_type": 2 00:10:59.112 } 00:10:59.112 ], 00:10:59.112 "driver_specific": {} 00:10:59.112 } 00:10:59.112 ] 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.112 BaseBdev3 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:59.112 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.113 [ 00:10:59.113 { 00:10:59.113 "name": "BaseBdev3", 00:10:59.113 "aliases": [ 00:10:59.113 "666d0129-8352-4c6f-a803-2ae688ac7777" 00:10:59.113 ], 00:10:59.113 "product_name": "Malloc disk", 00:10:59.113 "block_size": 512, 00:10:59.113 "num_blocks": 65536, 00:10:59.113 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:10:59.113 "assigned_rate_limits": { 00:10:59.113 "rw_ios_per_sec": 0, 00:10:59.113 "rw_mbytes_per_sec": 0, 00:10:59.113 "r_mbytes_per_sec": 0, 00:10:59.113 "w_mbytes_per_sec": 0 00:10:59.113 }, 00:10:59.113 "claimed": false, 00:10:59.113 "zoned": false, 00:10:59.113 "supported_io_types": { 00:10:59.113 "read": true, 00:10:59.113 "write": true, 00:10:59.113 "unmap": true, 00:10:59.113 "flush": true, 00:10:59.113 "reset": true, 00:10:59.113 "nvme_admin": false, 00:10:59.113 "nvme_io": false, 00:10:59.113 "nvme_io_md": false, 00:10:59.113 "write_zeroes": true, 00:10:59.113 "zcopy": true, 00:10:59.113 "get_zone_info": false, 00:10:59.113 "zone_management": false, 00:10:59.113 "zone_append": false, 00:10:59.113 "compare": false, 00:10:59.113 "compare_and_write": false, 00:10:59.113 "abort": true, 00:10:59.113 "seek_hole": false, 00:10:59.113 "seek_data": false, 
00:10:59.113 "copy": true, 00:10:59.113 "nvme_iov_md": false 00:10:59.113 }, 00:10:59.113 "memory_domains": [ 00:10:59.113 { 00:10:59.113 "dma_device_id": "system", 00:10:59.113 "dma_device_type": 1 00:10:59.113 }, 00:10:59.113 { 00:10:59.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.113 "dma_device_type": 2 00:10:59.113 } 00:10:59.113 ], 00:10:59.113 "driver_specific": {} 00:10:59.113 } 00:10:59.113 ] 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.113 BaseBdev4 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.113 
09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.113 09:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:59.113 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.113 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.371 [ 00:10:59.371 { 00:10:59.371 "name": "BaseBdev4", 00:10:59.371 "aliases": [ 00:10:59.371 "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2" 00:10:59.372 ], 00:10:59.372 "product_name": "Malloc disk", 00:10:59.372 "block_size": 512, 00:10:59.372 "num_blocks": 65536, 00:10:59.372 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:10:59.372 "assigned_rate_limits": { 00:10:59.372 "rw_ios_per_sec": 0, 00:10:59.372 "rw_mbytes_per_sec": 0, 00:10:59.372 "r_mbytes_per_sec": 0, 00:10:59.372 "w_mbytes_per_sec": 0 00:10:59.372 }, 00:10:59.372 "claimed": false, 00:10:59.372 "zoned": false, 00:10:59.372 "supported_io_types": { 00:10:59.372 "read": true, 00:10:59.372 "write": true, 00:10:59.372 "unmap": true, 00:10:59.372 "flush": true, 00:10:59.372 "reset": true, 00:10:59.372 "nvme_admin": false, 00:10:59.372 "nvme_io": false, 00:10:59.372 "nvme_io_md": false, 00:10:59.372 "write_zeroes": true, 00:10:59.372 "zcopy": true, 00:10:59.372 "get_zone_info": false, 00:10:59.372 "zone_management": false, 00:10:59.372 "zone_append": false, 00:10:59.372 "compare": false, 00:10:59.372 "compare_and_write": false, 00:10:59.372 "abort": true, 00:10:59.372 "seek_hole": false, 00:10:59.372 "seek_data": false, 00:10:59.372 
"copy": true, 00:10:59.372 "nvme_iov_md": false 00:10:59.372 }, 00:10:59.372 "memory_domains": [ 00:10:59.372 { 00:10:59.372 "dma_device_id": "system", 00:10:59.372 "dma_device_type": 1 00:10:59.372 }, 00:10:59.372 { 00:10:59.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.372 "dma_device_type": 2 00:10:59.372 } 00:10:59.372 ], 00:10:59.372 "driver_specific": {} 00:10:59.372 } 00:10:59.372 ] 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.372 [2024-10-15 09:10:17.031107] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.372 [2024-10-15 09:10:17.031252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.372 [2024-10-15 09:10:17.031313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.372 [2024-10-15 09:10:17.033504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.372 [2024-10-15 09:10:17.033630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.372 09:10:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.372 "name": "Existed_Raid", 00:10:59.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.372 "strip_size_kb": 64, 00:10:59.372 "state": "configuring", 00:10:59.372 
"raid_level": "concat", 00:10:59.372 "superblock": false, 00:10:59.372 "num_base_bdevs": 4, 00:10:59.372 "num_base_bdevs_discovered": 3, 00:10:59.372 "num_base_bdevs_operational": 4, 00:10:59.372 "base_bdevs_list": [ 00:10:59.372 { 00:10:59.372 "name": "BaseBdev1", 00:10:59.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.372 "is_configured": false, 00:10:59.372 "data_offset": 0, 00:10:59.372 "data_size": 0 00:10:59.372 }, 00:10:59.372 { 00:10:59.372 "name": "BaseBdev2", 00:10:59.372 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:10:59.372 "is_configured": true, 00:10:59.372 "data_offset": 0, 00:10:59.372 "data_size": 65536 00:10:59.372 }, 00:10:59.372 { 00:10:59.372 "name": "BaseBdev3", 00:10:59.372 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:10:59.372 "is_configured": true, 00:10:59.372 "data_offset": 0, 00:10:59.372 "data_size": 65536 00:10:59.372 }, 00:10:59.372 { 00:10:59.372 "name": "BaseBdev4", 00:10:59.372 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:10:59.372 "is_configured": true, 00:10:59.372 "data_offset": 0, 00:10:59.372 "data_size": 65536 00:10:59.372 } 00:10:59.372 ] 00:10:59.372 }' 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.372 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.631 [2024-10-15 09:10:17.494347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.631 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.889 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.889 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.889 "name": "Existed_Raid", 00:10:59.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.889 "strip_size_kb": 64, 00:10:59.889 "state": "configuring", 00:10:59.889 "raid_level": "concat", 00:10:59.889 "superblock": false, 
00:10:59.889 "num_base_bdevs": 4, 00:10:59.889 "num_base_bdevs_discovered": 2, 00:10:59.889 "num_base_bdevs_operational": 4, 00:10:59.889 "base_bdevs_list": [ 00:10:59.889 { 00:10:59.889 "name": "BaseBdev1", 00:10:59.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.889 "is_configured": false, 00:10:59.889 "data_offset": 0, 00:10:59.889 "data_size": 0 00:10:59.889 }, 00:10:59.889 { 00:10:59.889 "name": null, 00:10:59.889 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:10:59.889 "is_configured": false, 00:10:59.889 "data_offset": 0, 00:10:59.889 "data_size": 65536 00:10:59.889 }, 00:10:59.889 { 00:10:59.889 "name": "BaseBdev3", 00:10:59.889 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:10:59.889 "is_configured": true, 00:10:59.889 "data_offset": 0, 00:10:59.889 "data_size": 65536 00:10:59.889 }, 00:10:59.889 { 00:10:59.889 "name": "BaseBdev4", 00:10:59.889 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:10:59.889 "is_configured": true, 00:10:59.889 "data_offset": 0, 00:10:59.889 "data_size": 65536 00:10:59.889 } 00:10:59.889 ] 00:10:59.889 }' 00:10:59.889 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.889 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.147 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.147 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.147 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.147 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.147 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.147 09:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:00.147 09:10:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.147 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.147 09:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.147 [2024-10-15 09:10:18.007490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.147 BaseBdev1 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.147 09:10:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.147 [ 00:11:00.147 { 00:11:00.147 "name": "BaseBdev1", 00:11:00.147 "aliases": [ 00:11:00.147 "09db813a-a591-47ac-afc6-8e06238eb94a" 00:11:00.147 ], 00:11:00.147 "product_name": "Malloc disk", 00:11:00.147 "block_size": 512, 00:11:00.147 "num_blocks": 65536, 00:11:00.147 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:00.147 "assigned_rate_limits": { 00:11:00.147 "rw_ios_per_sec": 0, 00:11:00.147 "rw_mbytes_per_sec": 0, 00:11:00.147 "r_mbytes_per_sec": 0, 00:11:00.147 "w_mbytes_per_sec": 0 00:11:00.147 }, 00:11:00.147 "claimed": true, 00:11:00.147 "claim_type": "exclusive_write", 00:11:00.147 "zoned": false, 00:11:00.147 "supported_io_types": { 00:11:00.147 "read": true, 00:11:00.147 "write": true, 00:11:00.147 "unmap": true, 00:11:00.147 "flush": true, 00:11:00.147 "reset": true, 00:11:00.147 "nvme_admin": false, 00:11:00.406 "nvme_io": false, 00:11:00.406 "nvme_io_md": false, 00:11:00.406 "write_zeroes": true, 00:11:00.406 "zcopy": true, 00:11:00.406 "get_zone_info": false, 00:11:00.406 "zone_management": false, 00:11:00.406 "zone_append": false, 00:11:00.406 "compare": false, 00:11:00.406 "compare_and_write": false, 00:11:00.406 "abort": true, 00:11:00.406 "seek_hole": false, 00:11:00.406 "seek_data": false, 00:11:00.406 "copy": true, 00:11:00.406 "nvme_iov_md": false 00:11:00.406 }, 00:11:00.406 "memory_domains": [ 00:11:00.406 { 00:11:00.406 "dma_device_id": "system", 00:11:00.406 "dma_device_type": 1 00:11:00.406 }, 00:11:00.406 { 00:11:00.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.406 "dma_device_type": 2 00:11:00.406 } 00:11:00.406 ], 00:11:00.406 "driver_specific": {} 00:11:00.406 } 00:11:00.406 ] 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.406 "name": "Existed_Raid", 00:11:00.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.406 "strip_size_kb": 64, 00:11:00.406 "state": "configuring", 00:11:00.406 "raid_level": "concat", 00:11:00.406 "superblock": false, 
00:11:00.406 "num_base_bdevs": 4, 00:11:00.406 "num_base_bdevs_discovered": 3, 00:11:00.406 "num_base_bdevs_operational": 4, 00:11:00.406 "base_bdevs_list": [ 00:11:00.406 { 00:11:00.406 "name": "BaseBdev1", 00:11:00.406 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:00.406 "is_configured": true, 00:11:00.406 "data_offset": 0, 00:11:00.406 "data_size": 65536 00:11:00.406 }, 00:11:00.406 { 00:11:00.406 "name": null, 00:11:00.406 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:11:00.406 "is_configured": false, 00:11:00.406 "data_offset": 0, 00:11:00.406 "data_size": 65536 00:11:00.406 }, 00:11:00.406 { 00:11:00.406 "name": "BaseBdev3", 00:11:00.406 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:11:00.406 "is_configured": true, 00:11:00.406 "data_offset": 0, 00:11:00.406 "data_size": 65536 00:11:00.406 }, 00:11:00.406 { 00:11:00.406 "name": "BaseBdev4", 00:11:00.406 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:11:00.406 "is_configured": true, 00:11:00.406 "data_offset": 0, 00:11:00.406 "data_size": 65536 00:11:00.406 } 00:11:00.406 ] 00:11:00.406 }' 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.406 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.664 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.664 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.664 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.664 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.664 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:00.922 09:10:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.922 [2024-10-15 09:10:18.582738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.922 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.923 09:10:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.923 "name": "Existed_Raid", 00:11:00.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.923 "strip_size_kb": 64, 00:11:00.923 "state": "configuring", 00:11:00.923 "raid_level": "concat", 00:11:00.923 "superblock": false, 00:11:00.923 "num_base_bdevs": 4, 00:11:00.923 "num_base_bdevs_discovered": 2, 00:11:00.923 "num_base_bdevs_operational": 4, 00:11:00.923 "base_bdevs_list": [ 00:11:00.923 { 00:11:00.923 "name": "BaseBdev1", 00:11:00.923 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:00.923 "is_configured": true, 00:11:00.923 "data_offset": 0, 00:11:00.923 "data_size": 65536 00:11:00.923 }, 00:11:00.923 { 00:11:00.923 "name": null, 00:11:00.923 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:11:00.923 "is_configured": false, 00:11:00.923 "data_offset": 0, 00:11:00.923 "data_size": 65536 00:11:00.923 }, 00:11:00.923 { 00:11:00.923 "name": null, 00:11:00.923 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:11:00.923 "is_configured": false, 00:11:00.923 "data_offset": 0, 00:11:00.923 "data_size": 65536 00:11:00.923 }, 00:11:00.923 { 00:11:00.923 "name": "BaseBdev4", 00:11:00.923 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:11:00.923 "is_configured": true, 00:11:00.923 "data_offset": 0, 00:11:00.923 "data_size": 65536 00:11:00.923 } 00:11:00.923 ] 00:11:00.923 }' 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.923 09:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.185 09:10:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.185 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.185 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.185 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.185 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.444 [2024-10-15 09:10:19.113868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.444 "name": "Existed_Raid", 00:11:01.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.444 "strip_size_kb": 64, 00:11:01.444 "state": "configuring", 00:11:01.444 "raid_level": "concat", 00:11:01.444 "superblock": false, 00:11:01.444 "num_base_bdevs": 4, 00:11:01.444 "num_base_bdevs_discovered": 3, 00:11:01.444 "num_base_bdevs_operational": 4, 00:11:01.444 "base_bdevs_list": [ 00:11:01.444 { 00:11:01.444 "name": "BaseBdev1", 00:11:01.444 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:01.444 "is_configured": true, 00:11:01.444 "data_offset": 0, 00:11:01.444 "data_size": 65536 00:11:01.444 }, 00:11:01.444 { 00:11:01.444 "name": null, 00:11:01.444 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:11:01.444 "is_configured": false, 00:11:01.444 "data_offset": 0, 00:11:01.444 "data_size": 65536 00:11:01.444 }, 00:11:01.444 { 00:11:01.444 "name": "BaseBdev3", 00:11:01.444 "uuid": 
"666d0129-8352-4c6f-a803-2ae688ac7777", 00:11:01.444 "is_configured": true, 00:11:01.444 "data_offset": 0, 00:11:01.444 "data_size": 65536 00:11:01.444 }, 00:11:01.444 { 00:11:01.444 "name": "BaseBdev4", 00:11:01.444 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:11:01.444 "is_configured": true, 00:11:01.444 "data_offset": 0, 00:11:01.444 "data_size": 65536 00:11:01.444 } 00:11:01.444 ] 00:11:01.444 }' 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.444 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.705 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.705 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.705 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.705 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.705 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.965 [2024-10-15 09:10:19.633004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.965 "name": "Existed_Raid", 00:11:01.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.965 "strip_size_kb": 64, 00:11:01.965 "state": "configuring", 00:11:01.965 "raid_level": "concat", 00:11:01.965 "superblock": false, 00:11:01.965 "num_base_bdevs": 4, 00:11:01.965 
"num_base_bdevs_discovered": 2, 00:11:01.965 "num_base_bdevs_operational": 4, 00:11:01.965 "base_bdevs_list": [ 00:11:01.965 { 00:11:01.965 "name": null, 00:11:01.965 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:01.965 "is_configured": false, 00:11:01.965 "data_offset": 0, 00:11:01.965 "data_size": 65536 00:11:01.965 }, 00:11:01.965 { 00:11:01.965 "name": null, 00:11:01.965 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:11:01.965 "is_configured": false, 00:11:01.965 "data_offset": 0, 00:11:01.965 "data_size": 65536 00:11:01.965 }, 00:11:01.965 { 00:11:01.965 "name": "BaseBdev3", 00:11:01.965 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:11:01.965 "is_configured": true, 00:11:01.965 "data_offset": 0, 00:11:01.965 "data_size": 65536 00:11:01.965 }, 00:11:01.965 { 00:11:01.965 "name": "BaseBdev4", 00:11:01.965 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:11:01.965 "is_configured": true, 00:11:01.965 "data_offset": 0, 00:11:01.965 "data_size": 65536 00:11:01.965 } 00:11:01.965 ] 00:11:01.965 }' 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.965 09:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.533 [2024-10-15 09:10:20.241878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.533 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.533 "name": "Existed_Raid", 00:11:02.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.533 "strip_size_kb": 64, 00:11:02.533 "state": "configuring", 00:11:02.533 "raid_level": "concat", 00:11:02.533 "superblock": false, 00:11:02.533 "num_base_bdevs": 4, 00:11:02.533 "num_base_bdevs_discovered": 3, 00:11:02.533 "num_base_bdevs_operational": 4, 00:11:02.533 "base_bdevs_list": [ 00:11:02.533 { 00:11:02.533 "name": null, 00:11:02.533 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:02.533 "is_configured": false, 00:11:02.533 "data_offset": 0, 00:11:02.533 "data_size": 65536 00:11:02.533 }, 00:11:02.533 { 00:11:02.533 "name": "BaseBdev2", 00:11:02.533 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:11:02.533 "is_configured": true, 00:11:02.533 "data_offset": 0, 00:11:02.533 "data_size": 65536 00:11:02.533 }, 00:11:02.533 { 00:11:02.534 "name": "BaseBdev3", 00:11:02.534 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:11:02.534 "is_configured": true, 00:11:02.534 "data_offset": 0, 00:11:02.534 "data_size": 65536 00:11:02.534 }, 00:11:02.534 { 00:11:02.534 "name": "BaseBdev4", 00:11:02.534 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:11:02.534 "is_configured": true, 00:11:02.534 "data_offset": 0, 00:11:02.534 "data_size": 65536 00:11:02.534 } 00:11:02.534 ] 00:11:02.534 }' 00:11:02.534 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.534 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 09db813a-a591-47ac-afc6-8e06238eb94a 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.101 [2024-10-15 09:10:20.850781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:03.101 [2024-10-15 09:10:20.850948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:03.101 [2024-10-15 09:10:20.850978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:03.101 [2024-10-15 09:10:20.851334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:03.101 [2024-10-15 09:10:20.851570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:03.101 [2024-10-15 09:10:20.851624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:03.101 [2024-10-15 09:10:20.851991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.101 NewBaseBdev 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.101 09:10:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.101 [ 00:11:03.101 { 00:11:03.101 "name": "NewBaseBdev", 00:11:03.101 "aliases": [ 00:11:03.101 "09db813a-a591-47ac-afc6-8e06238eb94a" 00:11:03.101 ], 00:11:03.101 "product_name": "Malloc disk", 00:11:03.101 "block_size": 512, 00:11:03.101 "num_blocks": 65536, 00:11:03.101 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:03.101 "assigned_rate_limits": { 00:11:03.101 "rw_ios_per_sec": 0, 00:11:03.102 "rw_mbytes_per_sec": 0, 00:11:03.102 "r_mbytes_per_sec": 0, 00:11:03.102 "w_mbytes_per_sec": 0 00:11:03.102 }, 00:11:03.102 "claimed": true, 00:11:03.102 "claim_type": "exclusive_write", 00:11:03.102 "zoned": false, 00:11:03.102 "supported_io_types": { 00:11:03.102 "read": true, 00:11:03.102 "write": true, 00:11:03.102 "unmap": true, 00:11:03.102 "flush": true, 00:11:03.102 "reset": true, 00:11:03.102 "nvme_admin": false, 00:11:03.102 "nvme_io": false, 00:11:03.102 "nvme_io_md": false, 00:11:03.102 "write_zeroes": true, 00:11:03.102 "zcopy": true, 00:11:03.102 "get_zone_info": false, 00:11:03.102 "zone_management": false, 00:11:03.102 "zone_append": false, 00:11:03.102 "compare": false, 00:11:03.102 "compare_and_write": false, 00:11:03.102 "abort": true, 00:11:03.102 "seek_hole": false, 00:11:03.102 "seek_data": false, 00:11:03.102 "copy": true, 00:11:03.102 "nvme_iov_md": false 00:11:03.102 }, 00:11:03.102 "memory_domains": [ 00:11:03.102 { 00:11:03.102 "dma_device_id": "system", 00:11:03.102 "dma_device_type": 1 00:11:03.102 }, 00:11:03.102 { 00:11:03.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.102 "dma_device_type": 2 00:11:03.102 } 00:11:03.102 ], 00:11:03.102 "driver_specific": {} 00:11:03.102 } 00:11:03.102 ] 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.102 "name": "Existed_Raid", 00:11:03.102 "uuid": "942d201f-3efe-403b-9076-24e7af361ff9", 00:11:03.102 "strip_size_kb": 64, 00:11:03.102 "state": "online", 00:11:03.102 "raid_level": "concat", 00:11:03.102 "superblock": false, 00:11:03.102 
"num_base_bdevs": 4, 00:11:03.102 "num_base_bdevs_discovered": 4, 00:11:03.102 "num_base_bdevs_operational": 4, 00:11:03.102 "base_bdevs_list": [ 00:11:03.102 { 00:11:03.102 "name": "NewBaseBdev", 00:11:03.102 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:03.102 "is_configured": true, 00:11:03.102 "data_offset": 0, 00:11:03.102 "data_size": 65536 00:11:03.102 }, 00:11:03.102 { 00:11:03.102 "name": "BaseBdev2", 00:11:03.102 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:11:03.102 "is_configured": true, 00:11:03.102 "data_offset": 0, 00:11:03.102 "data_size": 65536 00:11:03.102 }, 00:11:03.102 { 00:11:03.102 "name": "BaseBdev3", 00:11:03.102 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:11:03.102 "is_configured": true, 00:11:03.102 "data_offset": 0, 00:11:03.102 "data_size": 65536 00:11:03.102 }, 00:11:03.102 { 00:11:03.102 "name": "BaseBdev4", 00:11:03.102 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:11:03.102 "is_configured": true, 00:11:03.102 "data_offset": 0, 00:11:03.102 "data_size": 65536 00:11:03.102 } 00:11:03.102 ] 00:11:03.102 }' 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.102 09:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.737 09:10:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.737 [2024-10-15 09:10:21.350428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.737 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.737 "name": "Existed_Raid", 00:11:03.737 "aliases": [ 00:11:03.737 "942d201f-3efe-403b-9076-24e7af361ff9" 00:11:03.737 ], 00:11:03.737 "product_name": "Raid Volume", 00:11:03.737 "block_size": 512, 00:11:03.737 "num_blocks": 262144, 00:11:03.737 "uuid": "942d201f-3efe-403b-9076-24e7af361ff9", 00:11:03.737 "assigned_rate_limits": { 00:11:03.737 "rw_ios_per_sec": 0, 00:11:03.737 "rw_mbytes_per_sec": 0, 00:11:03.737 "r_mbytes_per_sec": 0, 00:11:03.737 "w_mbytes_per_sec": 0 00:11:03.737 }, 00:11:03.737 "claimed": false, 00:11:03.737 "zoned": false, 00:11:03.737 "supported_io_types": { 00:11:03.737 "read": true, 00:11:03.737 "write": true, 00:11:03.737 "unmap": true, 00:11:03.737 "flush": true, 00:11:03.737 "reset": true, 00:11:03.737 "nvme_admin": false, 00:11:03.737 "nvme_io": false, 00:11:03.737 "nvme_io_md": false, 00:11:03.737 "write_zeroes": true, 00:11:03.737 "zcopy": false, 00:11:03.737 "get_zone_info": false, 00:11:03.737 "zone_management": false, 00:11:03.737 "zone_append": false, 00:11:03.737 "compare": false, 00:11:03.737 "compare_and_write": false, 00:11:03.737 "abort": false, 00:11:03.737 "seek_hole": false, 00:11:03.737 "seek_data": false, 00:11:03.737 "copy": false, 00:11:03.737 "nvme_iov_md": false 00:11:03.737 }, 
00:11:03.737 "memory_domains": [ 00:11:03.737 { 00:11:03.737 "dma_device_id": "system", 00:11:03.737 "dma_device_type": 1 00:11:03.737 }, 00:11:03.737 { 00:11:03.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.737 "dma_device_type": 2 00:11:03.737 }, 00:11:03.737 { 00:11:03.737 "dma_device_id": "system", 00:11:03.737 "dma_device_type": 1 00:11:03.737 }, 00:11:03.737 { 00:11:03.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.737 "dma_device_type": 2 00:11:03.737 }, 00:11:03.737 { 00:11:03.737 "dma_device_id": "system", 00:11:03.737 "dma_device_type": 1 00:11:03.737 }, 00:11:03.737 { 00:11:03.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.737 "dma_device_type": 2 00:11:03.737 }, 00:11:03.737 { 00:11:03.737 "dma_device_id": "system", 00:11:03.737 "dma_device_type": 1 00:11:03.737 }, 00:11:03.737 { 00:11:03.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.737 "dma_device_type": 2 00:11:03.737 } 00:11:03.737 ], 00:11:03.737 "driver_specific": { 00:11:03.737 "raid": { 00:11:03.737 "uuid": "942d201f-3efe-403b-9076-24e7af361ff9", 00:11:03.737 "strip_size_kb": 64, 00:11:03.737 "state": "online", 00:11:03.737 "raid_level": "concat", 00:11:03.737 "superblock": false, 00:11:03.737 "num_base_bdevs": 4, 00:11:03.737 "num_base_bdevs_discovered": 4, 00:11:03.737 "num_base_bdevs_operational": 4, 00:11:03.738 "base_bdevs_list": [ 00:11:03.738 { 00:11:03.738 "name": "NewBaseBdev", 00:11:03.738 "uuid": "09db813a-a591-47ac-afc6-8e06238eb94a", 00:11:03.738 "is_configured": true, 00:11:03.738 "data_offset": 0, 00:11:03.738 "data_size": 65536 00:11:03.738 }, 00:11:03.738 { 00:11:03.738 "name": "BaseBdev2", 00:11:03.738 "uuid": "ad70116b-3cdd-46c0-872c-0f0c7cae3e9b", 00:11:03.738 "is_configured": true, 00:11:03.738 "data_offset": 0, 00:11:03.738 "data_size": 65536 00:11:03.738 }, 00:11:03.738 { 00:11:03.738 "name": "BaseBdev3", 00:11:03.738 "uuid": "666d0129-8352-4c6f-a803-2ae688ac7777", 00:11:03.738 "is_configured": true, 00:11:03.738 "data_offset": 0, 
00:11:03.738 "data_size": 65536 00:11:03.738 }, 00:11:03.738 { 00:11:03.738 "name": "BaseBdev4", 00:11:03.738 "uuid": "fbf9dcbe-ed7c-475b-a8cc-121901dcf5f2", 00:11:03.738 "is_configured": true, 00:11:03.738 "data_offset": 0, 00:11:03.738 "data_size": 65536 00:11:03.738 } 00:11:03.738 ] 00:11:03.738 } 00:11:03.738 } 00:11:03.738 }' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:03.738 BaseBdev2 00:11:03.738 BaseBdev3 00:11:03.738 BaseBdev4' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.738 [2024-10-15 09:10:21.621557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.738 [2024-10-15 09:10:21.621665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.738 [2024-10-15 09:10:21.621787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.738 [2024-10-15 09:10:21.621874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.738 [2024-10-15 09:10:21.621886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71380 00:11:03.738 09:10:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71380 ']' 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71380 00:11:03.738 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:03.998 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.998 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71380 00:11:03.998 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.998 killing process with pid 71380 00:11:03.998 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.998 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71380' 00:11:03.998 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71380 00:11:03.998 [2024-10-15 09:10:21.676651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.998 09:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71380 00:11:04.566 [2024-10-15 09:10:22.160151] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:05.945 00:11:05.945 real 0m12.029s 00:11:05.945 user 0m18.819s 00:11:05.945 sys 0m2.113s 00:11:05.945 ************************************ 00:11:05.945 END TEST raid_state_function_test 00:11:05.945 ************************************ 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.945 09:10:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:05.945 09:10:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:05.945 09:10:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.945 09:10:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.945 ************************************ 00:11:05.945 START TEST raid_state_function_test_sb 00:11:05.945 ************************************ 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:05.945 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72062 00:11:05.946 09:10:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72062' 00:11:05.946 Process raid pid: 72062 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72062 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72062 ']' 00:11:05.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.946 09:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.946 [2024-10-15 09:10:23.661724] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:11:05.946 [2024-10-15 09:10:23.661979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.946 [2024-10-15 09:10:23.835876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.204 [2024-10-15 09:10:23.972680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.492 [2024-10-15 09:10:24.210391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.492 [2024-10-15 09:10:24.210430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.755 [2024-10-15 09:10:24.567102] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.755 [2024-10-15 09:10:24.567180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.755 [2024-10-15 09:10:24.567192] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.755 [2024-10-15 09:10:24.567203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.755 [2024-10-15 09:10:24.567211] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:06.755 [2024-10-15 09:10:24.567221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.755 [2024-10-15 09:10:24.567228] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:06.755 [2024-10-15 09:10:24.567238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.755 
09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.755 "name": "Existed_Raid", 00:11:06.755 "uuid": "52feb742-dbf2-4934-983a-8bf03ac6a088", 00:11:06.755 "strip_size_kb": 64, 00:11:06.755 "state": "configuring", 00:11:06.755 "raid_level": "concat", 00:11:06.755 "superblock": true, 00:11:06.755 "num_base_bdevs": 4, 00:11:06.755 "num_base_bdevs_discovered": 0, 00:11:06.755 "num_base_bdevs_operational": 4, 00:11:06.755 "base_bdevs_list": [ 00:11:06.755 { 00:11:06.755 "name": "BaseBdev1", 00:11:06.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.755 "is_configured": false, 00:11:06.755 "data_offset": 0, 00:11:06.755 "data_size": 0 00:11:06.755 }, 00:11:06.755 { 00:11:06.755 "name": "BaseBdev2", 00:11:06.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.755 "is_configured": false, 00:11:06.755 "data_offset": 0, 00:11:06.755 "data_size": 0 00:11:06.755 }, 00:11:06.755 { 00:11:06.755 "name": "BaseBdev3", 00:11:06.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.755 "is_configured": false, 00:11:06.755 "data_offset": 0, 00:11:06.755 "data_size": 0 00:11:06.755 }, 00:11:06.755 { 00:11:06.755 "name": "BaseBdev4", 00:11:06.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.755 "is_configured": false, 00:11:06.755 "data_offset": 0, 00:11:06.755 "data_size": 0 00:11:06.755 } 00:11:06.755 ] 00:11:06.755 }' 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.755 09:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.324 09:10:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.324 [2024-10-15 09:10:25.070162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:07.324 [2024-10-15 09:10:25.070336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.324 [2024-10-15 09:10:25.082189] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.324 [2024-10-15 09:10:25.082361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.324 [2024-10-15 09:10:25.082412] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.324 [2024-10-15 09:10:25.082460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.324 [2024-10-15 09:10:25.082503] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:07.324 [2024-10-15 09:10:25.082550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:07.324 [2024-10-15 09:10:25.082593] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:07.324 [2024-10-15 09:10:25.082639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.324 [2024-10-15 09:10:25.137085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.324 BaseBdev1 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.324 [ 00:11:07.324 { 00:11:07.324 "name": "BaseBdev1", 00:11:07.324 "aliases": [ 00:11:07.324 "c8bab4e7-f932-428c-bd3b-af4939dc7a5f" 00:11:07.324 ], 00:11:07.324 "product_name": "Malloc disk", 00:11:07.324 "block_size": 512, 00:11:07.324 "num_blocks": 65536, 00:11:07.324 "uuid": "c8bab4e7-f932-428c-bd3b-af4939dc7a5f", 00:11:07.324 "assigned_rate_limits": { 00:11:07.324 "rw_ios_per_sec": 0, 00:11:07.324 "rw_mbytes_per_sec": 0, 00:11:07.324 "r_mbytes_per_sec": 0, 00:11:07.324 "w_mbytes_per_sec": 0 00:11:07.324 }, 00:11:07.324 "claimed": true, 00:11:07.324 "claim_type": "exclusive_write", 00:11:07.324 "zoned": false, 00:11:07.324 "supported_io_types": { 00:11:07.324 "read": true, 00:11:07.324 "write": true, 00:11:07.324 "unmap": true, 00:11:07.324 "flush": true, 00:11:07.324 "reset": true, 00:11:07.324 "nvme_admin": false, 00:11:07.324 "nvme_io": false, 00:11:07.324 "nvme_io_md": false, 00:11:07.324 "write_zeroes": true, 00:11:07.324 "zcopy": true, 00:11:07.324 "get_zone_info": false, 00:11:07.324 "zone_management": false, 00:11:07.324 "zone_append": false, 00:11:07.324 "compare": false, 00:11:07.324 "compare_and_write": false, 00:11:07.324 "abort": true, 00:11:07.324 "seek_hole": false, 00:11:07.324 "seek_data": false, 00:11:07.324 "copy": true, 00:11:07.324 "nvme_iov_md": false 00:11:07.324 }, 00:11:07.324 "memory_domains": [ 00:11:07.324 { 00:11:07.324 "dma_device_id": "system", 00:11:07.324 "dma_device_type": 1 00:11:07.324 }, 00:11:07.324 { 00:11:07.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.324 "dma_device_type": 2 00:11:07.324 } 
00:11:07.324 ], 00:11:07.324 "driver_specific": {} 00:11:07.324 } 00:11:07.324 ] 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.324 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.324 09:10:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.584 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.584 "name": "Existed_Raid", 00:11:07.584 "uuid": "b7051507-7bc0-4ad3-a3e4-a78b25cc73bd", 00:11:07.584 "strip_size_kb": 64, 00:11:07.584 "state": "configuring", 00:11:07.584 "raid_level": "concat", 00:11:07.584 "superblock": true, 00:11:07.584 "num_base_bdevs": 4, 00:11:07.584 "num_base_bdevs_discovered": 1, 00:11:07.584 "num_base_bdevs_operational": 4, 00:11:07.584 "base_bdevs_list": [ 00:11:07.584 { 00:11:07.584 "name": "BaseBdev1", 00:11:07.584 "uuid": "c8bab4e7-f932-428c-bd3b-af4939dc7a5f", 00:11:07.584 "is_configured": true, 00:11:07.584 "data_offset": 2048, 00:11:07.584 "data_size": 63488 00:11:07.584 }, 00:11:07.584 { 00:11:07.584 "name": "BaseBdev2", 00:11:07.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.584 "is_configured": false, 00:11:07.584 "data_offset": 0, 00:11:07.584 "data_size": 0 00:11:07.584 }, 00:11:07.584 { 00:11:07.584 "name": "BaseBdev3", 00:11:07.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.584 "is_configured": false, 00:11:07.584 "data_offset": 0, 00:11:07.584 "data_size": 0 00:11:07.584 }, 00:11:07.584 { 00:11:07.584 "name": "BaseBdev4", 00:11:07.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.584 "is_configured": false, 00:11:07.584 "data_offset": 0, 00:11:07.584 "data_size": 0 00:11:07.584 } 00:11:07.584 ] 00:11:07.584 }' 00:11:07.584 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.584 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.843 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:07.843 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.843 09:10:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.843 [2024-10-15 09:10:25.624333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:07.843 [2024-10-15 09:10:25.624489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:07.843 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.843 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.843 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.843 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.843 [2024-10-15 09:10:25.636362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.843 [2024-10-15 09:10:25.638254] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.844 [2024-10-15 09:10:25.638338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.844 [2024-10-15 09:10:25.638355] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:07.844 [2024-10-15 09:10:25.638367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:07.844 [2024-10-15 09:10:25.638374] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:07.844 [2024-10-15 09:10:25.638382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:07.844 "name": "Existed_Raid", 00:11:07.844 "uuid": "56ecb33e-3d7f-4f43-ab19-40b8efef828b", 00:11:07.844 "strip_size_kb": 64, 00:11:07.844 "state": "configuring", 00:11:07.844 "raid_level": "concat", 00:11:07.844 "superblock": true, 00:11:07.844 "num_base_bdevs": 4, 00:11:07.844 "num_base_bdevs_discovered": 1, 00:11:07.844 "num_base_bdevs_operational": 4, 00:11:07.844 "base_bdevs_list": [ 00:11:07.844 { 00:11:07.844 "name": "BaseBdev1", 00:11:07.844 "uuid": "c8bab4e7-f932-428c-bd3b-af4939dc7a5f", 00:11:07.844 "is_configured": true, 00:11:07.844 "data_offset": 2048, 00:11:07.844 "data_size": 63488 00:11:07.844 }, 00:11:07.844 { 00:11:07.844 "name": "BaseBdev2", 00:11:07.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.844 "is_configured": false, 00:11:07.844 "data_offset": 0, 00:11:07.844 "data_size": 0 00:11:07.844 }, 00:11:07.844 { 00:11:07.844 "name": "BaseBdev3", 00:11:07.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.844 "is_configured": false, 00:11:07.844 "data_offset": 0, 00:11:07.844 "data_size": 0 00:11:07.844 }, 00:11:07.844 { 00:11:07.844 "name": "BaseBdev4", 00:11:07.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.844 "is_configured": false, 00:11:07.844 "data_offset": 0, 00:11:07.844 "data_size": 0 00:11:07.844 } 00:11:07.844 ] 00:11:07.844 }' 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.844 09:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.412 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.412 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.412 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.413 [2024-10-15 09:10:26.098119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:08.413 BaseBdev2 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.413 [ 00:11:08.413 { 00:11:08.413 "name": "BaseBdev2", 00:11:08.413 "aliases": [ 00:11:08.413 "6a72f779-d966-4fe8-916f-e5a55d9b5bc8" 00:11:08.413 ], 00:11:08.413 "product_name": "Malloc disk", 00:11:08.413 "block_size": 512, 00:11:08.413 "num_blocks": 65536, 00:11:08.413 "uuid": "6a72f779-d966-4fe8-916f-e5a55d9b5bc8", 
00:11:08.413 "assigned_rate_limits": { 00:11:08.413 "rw_ios_per_sec": 0, 00:11:08.413 "rw_mbytes_per_sec": 0, 00:11:08.413 "r_mbytes_per_sec": 0, 00:11:08.413 "w_mbytes_per_sec": 0 00:11:08.413 }, 00:11:08.413 "claimed": true, 00:11:08.413 "claim_type": "exclusive_write", 00:11:08.413 "zoned": false, 00:11:08.413 "supported_io_types": { 00:11:08.413 "read": true, 00:11:08.413 "write": true, 00:11:08.413 "unmap": true, 00:11:08.413 "flush": true, 00:11:08.413 "reset": true, 00:11:08.413 "nvme_admin": false, 00:11:08.413 "nvme_io": false, 00:11:08.413 "nvme_io_md": false, 00:11:08.413 "write_zeroes": true, 00:11:08.413 "zcopy": true, 00:11:08.413 "get_zone_info": false, 00:11:08.413 "zone_management": false, 00:11:08.413 "zone_append": false, 00:11:08.413 "compare": false, 00:11:08.413 "compare_and_write": false, 00:11:08.413 "abort": true, 00:11:08.413 "seek_hole": false, 00:11:08.413 "seek_data": false, 00:11:08.413 "copy": true, 00:11:08.413 "nvme_iov_md": false 00:11:08.413 }, 00:11:08.413 "memory_domains": [ 00:11:08.413 { 00:11:08.413 "dma_device_id": "system", 00:11:08.413 "dma_device_type": 1 00:11:08.413 }, 00:11:08.413 { 00:11:08.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.413 "dma_device_type": 2 00:11:08.413 } 00:11:08.413 ], 00:11:08.413 "driver_specific": {} 00:11:08.413 } 00:11:08.413 ] 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.413 "name": "Existed_Raid", 00:11:08.413 "uuid": "56ecb33e-3d7f-4f43-ab19-40b8efef828b", 00:11:08.413 "strip_size_kb": 64, 00:11:08.413 "state": "configuring", 00:11:08.413 "raid_level": "concat", 00:11:08.413 "superblock": true, 00:11:08.413 "num_base_bdevs": 4, 00:11:08.413 "num_base_bdevs_discovered": 2, 00:11:08.413 
"num_base_bdevs_operational": 4, 00:11:08.413 "base_bdevs_list": [ 00:11:08.413 { 00:11:08.413 "name": "BaseBdev1", 00:11:08.413 "uuid": "c8bab4e7-f932-428c-bd3b-af4939dc7a5f", 00:11:08.413 "is_configured": true, 00:11:08.413 "data_offset": 2048, 00:11:08.413 "data_size": 63488 00:11:08.413 }, 00:11:08.413 { 00:11:08.413 "name": "BaseBdev2", 00:11:08.413 "uuid": "6a72f779-d966-4fe8-916f-e5a55d9b5bc8", 00:11:08.413 "is_configured": true, 00:11:08.413 "data_offset": 2048, 00:11:08.413 "data_size": 63488 00:11:08.413 }, 00:11:08.413 { 00:11:08.413 "name": "BaseBdev3", 00:11:08.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.413 "is_configured": false, 00:11:08.413 "data_offset": 0, 00:11:08.413 "data_size": 0 00:11:08.413 }, 00:11:08.413 { 00:11:08.413 "name": "BaseBdev4", 00:11:08.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.413 "is_configured": false, 00:11:08.413 "data_offset": 0, 00:11:08.413 "data_size": 0 00:11:08.413 } 00:11:08.413 ] 00:11:08.413 }' 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.413 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.673 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:08.673 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.673 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.933 [2024-10-15 09:10:26.603175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.933 BaseBdev3 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.933 [ 00:11:08.933 { 00:11:08.933 "name": "BaseBdev3", 00:11:08.933 "aliases": [ 00:11:08.933 "e5761cca-9499-49a7-b4cf-b65225f04437" 00:11:08.933 ], 00:11:08.933 "product_name": "Malloc disk", 00:11:08.933 "block_size": 512, 00:11:08.933 "num_blocks": 65536, 00:11:08.933 "uuid": "e5761cca-9499-49a7-b4cf-b65225f04437", 00:11:08.933 "assigned_rate_limits": { 00:11:08.933 "rw_ios_per_sec": 0, 00:11:08.933 "rw_mbytes_per_sec": 0, 00:11:08.933 "r_mbytes_per_sec": 0, 00:11:08.933 "w_mbytes_per_sec": 0 00:11:08.933 }, 00:11:08.933 "claimed": true, 00:11:08.933 "claim_type": "exclusive_write", 00:11:08.933 "zoned": false, 00:11:08.933 "supported_io_types": { 
00:11:08.933 "read": true, 00:11:08.933 "write": true, 00:11:08.933 "unmap": true, 00:11:08.933 "flush": true, 00:11:08.933 "reset": true, 00:11:08.933 "nvme_admin": false, 00:11:08.933 "nvme_io": false, 00:11:08.933 "nvme_io_md": false, 00:11:08.933 "write_zeroes": true, 00:11:08.933 "zcopy": true, 00:11:08.933 "get_zone_info": false, 00:11:08.933 "zone_management": false, 00:11:08.933 "zone_append": false, 00:11:08.933 "compare": false, 00:11:08.933 "compare_and_write": false, 00:11:08.933 "abort": true, 00:11:08.933 "seek_hole": false, 00:11:08.933 "seek_data": false, 00:11:08.933 "copy": true, 00:11:08.933 "nvme_iov_md": false 00:11:08.933 }, 00:11:08.933 "memory_domains": [ 00:11:08.933 { 00:11:08.933 "dma_device_id": "system", 00:11:08.933 "dma_device_type": 1 00:11:08.933 }, 00:11:08.933 { 00:11:08.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.933 "dma_device_type": 2 00:11:08.933 } 00:11:08.933 ], 00:11:08.933 "driver_specific": {} 00:11:08.933 } 00:11:08.933 ] 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.933 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.934 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.934 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.934 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.934 "name": "Existed_Raid", 00:11:08.934 "uuid": "56ecb33e-3d7f-4f43-ab19-40b8efef828b", 00:11:08.934 "strip_size_kb": 64, 00:11:08.934 "state": "configuring", 00:11:08.934 "raid_level": "concat", 00:11:08.934 "superblock": true, 00:11:08.934 "num_base_bdevs": 4, 00:11:08.934 "num_base_bdevs_discovered": 3, 00:11:08.934 "num_base_bdevs_operational": 4, 00:11:08.934 "base_bdevs_list": [ 00:11:08.934 { 00:11:08.934 "name": "BaseBdev1", 00:11:08.934 "uuid": "c8bab4e7-f932-428c-bd3b-af4939dc7a5f", 00:11:08.934 "is_configured": true, 00:11:08.934 "data_offset": 2048, 00:11:08.934 "data_size": 63488 00:11:08.934 }, 00:11:08.934 { 00:11:08.934 "name": "BaseBdev2", 00:11:08.934 
"uuid": "6a72f779-d966-4fe8-916f-e5a55d9b5bc8", 00:11:08.934 "is_configured": true, 00:11:08.934 "data_offset": 2048, 00:11:08.934 "data_size": 63488 00:11:08.934 }, 00:11:08.934 { 00:11:08.934 "name": "BaseBdev3", 00:11:08.934 "uuid": "e5761cca-9499-49a7-b4cf-b65225f04437", 00:11:08.934 "is_configured": true, 00:11:08.934 "data_offset": 2048, 00:11:08.934 "data_size": 63488 00:11:08.934 }, 00:11:08.934 { 00:11:08.934 "name": "BaseBdev4", 00:11:08.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.934 "is_configured": false, 00:11:08.934 "data_offset": 0, 00:11:08.934 "data_size": 0 00:11:08.934 } 00:11:08.934 ] 00:11:08.934 }' 00:11:08.934 09:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.934 09:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.193 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.193 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.193 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.453 [2024-10-15 09:10:27.109258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.453 [2024-10-15 09:10:27.109558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.453 [2024-10-15 09:10:27.109574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.453 [2024-10-15 09:10:27.109894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:09.453 [2024-10-15 09:10:27.110082] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.453 [2024-10-15 09:10:27.110095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:09.453 BaseBdev4 00:11:09.453 [2024-10-15 09:10:27.110246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.453 [ 00:11:09.453 { 00:11:09.453 "name": "BaseBdev4", 00:11:09.453 "aliases": [ 00:11:09.453 "8fdb5629-eb6e-44fc-a704-6184e52649a6" 00:11:09.453 ], 00:11:09.453 "product_name": "Malloc disk", 00:11:09.453 "block_size": 512, 
00:11:09.453 "num_blocks": 65536, 00:11:09.453 "uuid": "8fdb5629-eb6e-44fc-a704-6184e52649a6", 00:11:09.453 "assigned_rate_limits": { 00:11:09.453 "rw_ios_per_sec": 0, 00:11:09.453 "rw_mbytes_per_sec": 0, 00:11:09.453 "r_mbytes_per_sec": 0, 00:11:09.453 "w_mbytes_per_sec": 0 00:11:09.453 }, 00:11:09.453 "claimed": true, 00:11:09.453 "claim_type": "exclusive_write", 00:11:09.453 "zoned": false, 00:11:09.453 "supported_io_types": { 00:11:09.453 "read": true, 00:11:09.453 "write": true, 00:11:09.453 "unmap": true, 00:11:09.453 "flush": true, 00:11:09.453 "reset": true, 00:11:09.453 "nvme_admin": false, 00:11:09.453 "nvme_io": false, 00:11:09.453 "nvme_io_md": false, 00:11:09.453 "write_zeroes": true, 00:11:09.453 "zcopy": true, 00:11:09.453 "get_zone_info": false, 00:11:09.453 "zone_management": false, 00:11:09.453 "zone_append": false, 00:11:09.453 "compare": false, 00:11:09.453 "compare_and_write": false, 00:11:09.453 "abort": true, 00:11:09.453 "seek_hole": false, 00:11:09.453 "seek_data": false, 00:11:09.453 "copy": true, 00:11:09.453 "nvme_iov_md": false 00:11:09.453 }, 00:11:09.453 "memory_domains": [ 00:11:09.453 { 00:11:09.453 "dma_device_id": "system", 00:11:09.453 "dma_device_type": 1 00:11:09.453 }, 00:11:09.453 { 00:11:09.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.453 "dma_device_type": 2 00:11:09.453 } 00:11:09.453 ], 00:11:09.453 "driver_specific": {} 00:11:09.453 } 00:11:09.453 ] 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.453 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.454 "name": "Existed_Raid", 00:11:09.454 "uuid": "56ecb33e-3d7f-4f43-ab19-40b8efef828b", 00:11:09.454 "strip_size_kb": 64, 00:11:09.454 "state": "online", 00:11:09.454 "raid_level": "concat", 00:11:09.454 "superblock": true, 00:11:09.454 "num_base_bdevs": 
4, 00:11:09.454 "num_base_bdevs_discovered": 4, 00:11:09.454 "num_base_bdevs_operational": 4, 00:11:09.454 "base_bdevs_list": [ 00:11:09.454 { 00:11:09.454 "name": "BaseBdev1", 00:11:09.454 "uuid": "c8bab4e7-f932-428c-bd3b-af4939dc7a5f", 00:11:09.454 "is_configured": true, 00:11:09.454 "data_offset": 2048, 00:11:09.454 "data_size": 63488 00:11:09.454 }, 00:11:09.454 { 00:11:09.454 "name": "BaseBdev2", 00:11:09.454 "uuid": "6a72f779-d966-4fe8-916f-e5a55d9b5bc8", 00:11:09.454 "is_configured": true, 00:11:09.454 "data_offset": 2048, 00:11:09.454 "data_size": 63488 00:11:09.454 }, 00:11:09.454 { 00:11:09.454 "name": "BaseBdev3", 00:11:09.454 "uuid": "e5761cca-9499-49a7-b4cf-b65225f04437", 00:11:09.454 "is_configured": true, 00:11:09.454 "data_offset": 2048, 00:11:09.454 "data_size": 63488 00:11:09.454 }, 00:11:09.454 { 00:11:09.454 "name": "BaseBdev4", 00:11:09.454 "uuid": "8fdb5629-eb6e-44fc-a704-6184e52649a6", 00:11:09.454 "is_configured": true, 00:11:09.454 "data_offset": 2048, 00:11:09.454 "data_size": 63488 00:11:09.454 } 00:11:09.454 ] 00:11:09.454 }' 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.454 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.022 
09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.022 [2024-10-15 09:10:27.628888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.022 "name": "Existed_Raid", 00:11:10.022 "aliases": [ 00:11:10.022 "56ecb33e-3d7f-4f43-ab19-40b8efef828b" 00:11:10.022 ], 00:11:10.022 "product_name": "Raid Volume", 00:11:10.022 "block_size": 512, 00:11:10.022 "num_blocks": 253952, 00:11:10.022 "uuid": "56ecb33e-3d7f-4f43-ab19-40b8efef828b", 00:11:10.022 "assigned_rate_limits": { 00:11:10.022 "rw_ios_per_sec": 0, 00:11:10.022 "rw_mbytes_per_sec": 0, 00:11:10.022 "r_mbytes_per_sec": 0, 00:11:10.022 "w_mbytes_per_sec": 0 00:11:10.022 }, 00:11:10.022 "claimed": false, 00:11:10.022 "zoned": false, 00:11:10.022 "supported_io_types": { 00:11:10.022 "read": true, 00:11:10.022 "write": true, 00:11:10.022 "unmap": true, 00:11:10.022 "flush": true, 00:11:10.022 "reset": true, 00:11:10.022 "nvme_admin": false, 00:11:10.022 "nvme_io": false, 00:11:10.022 "nvme_io_md": false, 00:11:10.022 "write_zeroes": true, 00:11:10.022 "zcopy": false, 00:11:10.022 "get_zone_info": false, 00:11:10.022 "zone_management": false, 00:11:10.022 "zone_append": false, 00:11:10.022 "compare": false, 00:11:10.022 "compare_and_write": false, 00:11:10.022 "abort": false, 00:11:10.022 "seek_hole": false, 00:11:10.022 "seek_data": false, 00:11:10.022 "copy": false, 00:11:10.022 
"nvme_iov_md": false 00:11:10.022 }, 00:11:10.022 "memory_domains": [ 00:11:10.022 { 00:11:10.022 "dma_device_id": "system", 00:11:10.022 "dma_device_type": 1 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.022 "dma_device_type": 2 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "dma_device_id": "system", 00:11:10.022 "dma_device_type": 1 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.022 "dma_device_type": 2 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "dma_device_id": "system", 00:11:10.022 "dma_device_type": 1 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.022 "dma_device_type": 2 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "dma_device_id": "system", 00:11:10.022 "dma_device_type": 1 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.022 "dma_device_type": 2 00:11:10.022 } 00:11:10.022 ], 00:11:10.022 "driver_specific": { 00:11:10.022 "raid": { 00:11:10.022 "uuid": "56ecb33e-3d7f-4f43-ab19-40b8efef828b", 00:11:10.022 "strip_size_kb": 64, 00:11:10.022 "state": "online", 00:11:10.022 "raid_level": "concat", 00:11:10.022 "superblock": true, 00:11:10.022 "num_base_bdevs": 4, 00:11:10.022 "num_base_bdevs_discovered": 4, 00:11:10.022 "num_base_bdevs_operational": 4, 00:11:10.022 "base_bdevs_list": [ 00:11:10.022 { 00:11:10.022 "name": "BaseBdev1", 00:11:10.022 "uuid": "c8bab4e7-f932-428c-bd3b-af4939dc7a5f", 00:11:10.022 "is_configured": true, 00:11:10.022 "data_offset": 2048, 00:11:10.022 "data_size": 63488 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "name": "BaseBdev2", 00:11:10.022 "uuid": "6a72f779-d966-4fe8-916f-e5a55d9b5bc8", 00:11:10.022 "is_configured": true, 00:11:10.022 "data_offset": 2048, 00:11:10.022 "data_size": 63488 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "name": "BaseBdev3", 00:11:10.022 "uuid": "e5761cca-9499-49a7-b4cf-b65225f04437", 00:11:10.022 "is_configured": true, 
00:11:10.022 "data_offset": 2048, 00:11:10.022 "data_size": 63488 00:11:10.022 }, 00:11:10.022 { 00:11:10.022 "name": "BaseBdev4", 00:11:10.022 "uuid": "8fdb5629-eb6e-44fc-a704-6184e52649a6", 00:11:10.022 "is_configured": true, 00:11:10.022 "data_offset": 2048, 00:11:10.022 "data_size": 63488 00:11:10.022 } 00:11:10.022 ] 00:11:10.022 } 00:11:10.022 } 00:11:10.022 }' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:10.022 BaseBdev2 00:11:10.022 BaseBdev3 00:11:10.022 BaseBdev4' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.022 09:10:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.022 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.281 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.281 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.281 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.281 09:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:10.281 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.281 09:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.281 [2024-10-15 09:10:27.964016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.281 [2024-10-15 09:10:27.964153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.281 [2024-10-15 09:10:27.964231] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.281 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.282 "name": "Existed_Raid", 00:11:10.282 "uuid": "56ecb33e-3d7f-4f43-ab19-40b8efef828b", 00:11:10.282 "strip_size_kb": 64, 00:11:10.282 "state": "offline", 00:11:10.282 "raid_level": "concat", 00:11:10.282 "superblock": true, 00:11:10.282 "num_base_bdevs": 4, 00:11:10.282 "num_base_bdevs_discovered": 3, 00:11:10.282 "num_base_bdevs_operational": 3, 00:11:10.282 "base_bdevs_list": [ 00:11:10.282 { 00:11:10.282 "name": null, 00:11:10.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.282 "is_configured": false, 00:11:10.282 "data_offset": 0, 00:11:10.282 "data_size": 63488 00:11:10.282 }, 00:11:10.282 { 00:11:10.282 "name": "BaseBdev2", 00:11:10.282 "uuid": "6a72f779-d966-4fe8-916f-e5a55d9b5bc8", 00:11:10.282 "is_configured": true, 00:11:10.282 "data_offset": 2048, 00:11:10.282 "data_size": 63488 00:11:10.282 }, 00:11:10.282 { 00:11:10.282 "name": "BaseBdev3", 00:11:10.282 "uuid": "e5761cca-9499-49a7-b4cf-b65225f04437", 00:11:10.282 "is_configured": true, 00:11:10.282 "data_offset": 2048, 00:11:10.282 "data_size": 63488 00:11:10.282 }, 00:11:10.282 { 00:11:10.282 "name": "BaseBdev4", 00:11:10.282 "uuid": "8fdb5629-eb6e-44fc-a704-6184e52649a6", 00:11:10.282 "is_configured": true, 00:11:10.282 "data_offset": 2048, 00:11:10.282 "data_size": 63488 00:11:10.282 } 00:11:10.282 ] 00:11:10.282 }' 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.282 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:10.850 09:10:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.850 [2024-10-15 09:10:28.563216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.850 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.850 [2024-10-15 09:10:28.724965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:11.108 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.108 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:11.108 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.108 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.108 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:11.108 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.108 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.108 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:11.109 09:10:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.109 [2024-10-15 09:10:28.884672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:11.109 [2024-10-15 09:10:28.884738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:11.109 09:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.109 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.368 BaseBdev2 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.368 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.368 [ 00:11:11.368 { 00:11:11.368 "name": "BaseBdev2", 00:11:11.368 "aliases": [ 00:11:11.368 
"cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe" 00:11:11.368 ], 00:11:11.369 "product_name": "Malloc disk", 00:11:11.369 "block_size": 512, 00:11:11.369 "num_blocks": 65536, 00:11:11.369 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:11.369 "assigned_rate_limits": { 00:11:11.369 "rw_ios_per_sec": 0, 00:11:11.369 "rw_mbytes_per_sec": 0, 00:11:11.369 "r_mbytes_per_sec": 0, 00:11:11.369 "w_mbytes_per_sec": 0 00:11:11.369 }, 00:11:11.369 "claimed": false, 00:11:11.369 "zoned": false, 00:11:11.369 "supported_io_types": { 00:11:11.369 "read": true, 00:11:11.369 "write": true, 00:11:11.369 "unmap": true, 00:11:11.369 "flush": true, 00:11:11.369 "reset": true, 00:11:11.369 "nvme_admin": false, 00:11:11.369 "nvme_io": false, 00:11:11.369 "nvme_io_md": false, 00:11:11.369 "write_zeroes": true, 00:11:11.369 "zcopy": true, 00:11:11.369 "get_zone_info": false, 00:11:11.369 "zone_management": false, 00:11:11.369 "zone_append": false, 00:11:11.369 "compare": false, 00:11:11.369 "compare_and_write": false, 00:11:11.369 "abort": true, 00:11:11.369 "seek_hole": false, 00:11:11.369 "seek_data": false, 00:11:11.369 "copy": true, 00:11:11.369 "nvme_iov_md": false 00:11:11.369 }, 00:11:11.369 "memory_domains": [ 00:11:11.369 { 00:11:11.369 "dma_device_id": "system", 00:11:11.369 "dma_device_type": 1 00:11:11.369 }, 00:11:11.369 { 00:11:11.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.369 "dma_device_type": 2 00:11:11.369 } 00:11:11.369 ], 00:11:11.369 "driver_specific": {} 00:11:11.369 } 00:11:11.369 ] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:11.369 09:10:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.369 BaseBdev3 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.369 [ 00:11:11.369 { 
00:11:11.369 "name": "BaseBdev3", 00:11:11.369 "aliases": [ 00:11:11.369 "b88758c2-5ded-4b82-8d21-a2f0db2dfb57" 00:11:11.369 ], 00:11:11.369 "product_name": "Malloc disk", 00:11:11.369 "block_size": 512, 00:11:11.369 "num_blocks": 65536, 00:11:11.369 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:11.369 "assigned_rate_limits": { 00:11:11.369 "rw_ios_per_sec": 0, 00:11:11.369 "rw_mbytes_per_sec": 0, 00:11:11.369 "r_mbytes_per_sec": 0, 00:11:11.369 "w_mbytes_per_sec": 0 00:11:11.369 }, 00:11:11.369 "claimed": false, 00:11:11.369 "zoned": false, 00:11:11.369 "supported_io_types": { 00:11:11.369 "read": true, 00:11:11.369 "write": true, 00:11:11.369 "unmap": true, 00:11:11.369 "flush": true, 00:11:11.369 "reset": true, 00:11:11.369 "nvme_admin": false, 00:11:11.369 "nvme_io": false, 00:11:11.369 "nvme_io_md": false, 00:11:11.369 "write_zeroes": true, 00:11:11.369 "zcopy": true, 00:11:11.369 "get_zone_info": false, 00:11:11.369 "zone_management": false, 00:11:11.369 "zone_append": false, 00:11:11.369 "compare": false, 00:11:11.369 "compare_and_write": false, 00:11:11.369 "abort": true, 00:11:11.369 "seek_hole": false, 00:11:11.369 "seek_data": false, 00:11:11.369 "copy": true, 00:11:11.369 "nvme_iov_md": false 00:11:11.369 }, 00:11:11.369 "memory_domains": [ 00:11:11.369 { 00:11:11.369 "dma_device_id": "system", 00:11:11.369 "dma_device_type": 1 00:11:11.369 }, 00:11:11.369 { 00:11:11.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.369 "dma_device_type": 2 00:11:11.369 } 00:11:11.369 ], 00:11:11.369 "driver_specific": {} 00:11:11.369 } 00:11:11.369 ] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.369 BaseBdev4 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.369 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:11.628 [ 00:11:11.628 { 00:11:11.628 "name": "BaseBdev4", 00:11:11.628 "aliases": [ 00:11:11.628 "55cd9f51-1ffd-4ab9-a989-e0c7763349ef" 00:11:11.628 ], 00:11:11.628 "product_name": "Malloc disk", 00:11:11.628 "block_size": 512, 00:11:11.628 "num_blocks": 65536, 00:11:11.628 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:11.628 "assigned_rate_limits": { 00:11:11.628 "rw_ios_per_sec": 0, 00:11:11.628 "rw_mbytes_per_sec": 0, 00:11:11.628 "r_mbytes_per_sec": 0, 00:11:11.628 "w_mbytes_per_sec": 0 00:11:11.628 }, 00:11:11.628 "claimed": false, 00:11:11.628 "zoned": false, 00:11:11.628 "supported_io_types": { 00:11:11.628 "read": true, 00:11:11.628 "write": true, 00:11:11.628 "unmap": true, 00:11:11.628 "flush": true, 00:11:11.628 "reset": true, 00:11:11.628 "nvme_admin": false, 00:11:11.628 "nvme_io": false, 00:11:11.628 "nvme_io_md": false, 00:11:11.628 "write_zeroes": true, 00:11:11.628 "zcopy": true, 00:11:11.628 "get_zone_info": false, 00:11:11.628 "zone_management": false, 00:11:11.628 "zone_append": false, 00:11:11.628 "compare": false, 00:11:11.628 "compare_and_write": false, 00:11:11.628 "abort": true, 00:11:11.628 "seek_hole": false, 00:11:11.628 "seek_data": false, 00:11:11.628 "copy": true, 00:11:11.628 "nvme_iov_md": false 00:11:11.628 }, 00:11:11.628 "memory_domains": [ 00:11:11.628 { 00:11:11.628 "dma_device_id": "system", 00:11:11.628 "dma_device_type": 1 00:11:11.628 }, 00:11:11.628 { 00:11:11.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.628 "dma_device_type": 2 00:11:11.628 } 00:11:11.628 ], 00:11:11.628 "driver_specific": {} 00:11:11.628 } 00:11:11.628 ] 00:11:11.628 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.628 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:11.628 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:11.628 09:10:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:11.628 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.628 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.628 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.628 [2024-10-15 09:10:29.287323] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.629 [2024-10-15 09:10:29.287375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.629 [2024-10-15 09:10:29.287403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.629 [2024-10-15 09:10:29.289382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.629 [2024-10-15 09:10:29.289493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.629 "name": "Existed_Raid", 00:11:11.629 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:11.629 "strip_size_kb": 64, 00:11:11.629 "state": "configuring", 00:11:11.629 "raid_level": "concat", 00:11:11.629 "superblock": true, 00:11:11.629 "num_base_bdevs": 4, 00:11:11.629 "num_base_bdevs_discovered": 3, 00:11:11.629 "num_base_bdevs_operational": 4, 00:11:11.629 "base_bdevs_list": [ 00:11:11.629 { 00:11:11.629 "name": "BaseBdev1", 00:11:11.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.629 "is_configured": false, 00:11:11.629 "data_offset": 0, 00:11:11.629 "data_size": 0 00:11:11.629 }, 00:11:11.629 { 00:11:11.629 "name": "BaseBdev2", 00:11:11.629 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:11.629 "is_configured": true, 00:11:11.629 "data_offset": 2048, 00:11:11.629 "data_size": 63488 
00:11:11.629 }, 00:11:11.629 { 00:11:11.629 "name": "BaseBdev3", 00:11:11.629 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:11.629 "is_configured": true, 00:11:11.629 "data_offset": 2048, 00:11:11.629 "data_size": 63488 00:11:11.629 }, 00:11:11.629 { 00:11:11.629 "name": "BaseBdev4", 00:11:11.629 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:11.629 "is_configured": true, 00:11:11.629 "data_offset": 2048, 00:11:11.629 "data_size": 63488 00:11:11.629 } 00:11:11.629 ] 00:11:11.629 }' 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.629 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.888 [2024-10-15 09:10:29.746592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.888 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.162 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.162 "name": "Existed_Raid", 00:11:12.162 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:12.162 "strip_size_kb": 64, 00:11:12.162 "state": "configuring", 00:11:12.162 "raid_level": "concat", 00:11:12.162 "superblock": true, 00:11:12.162 "num_base_bdevs": 4, 00:11:12.162 "num_base_bdevs_discovered": 2, 00:11:12.162 "num_base_bdevs_operational": 4, 00:11:12.162 "base_bdevs_list": [ 00:11:12.162 { 00:11:12.162 "name": "BaseBdev1", 00:11:12.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.162 "is_configured": false, 00:11:12.162 "data_offset": 0, 00:11:12.162 "data_size": 0 00:11:12.162 }, 00:11:12.162 { 00:11:12.162 "name": null, 00:11:12.162 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:12.162 "is_configured": false, 00:11:12.162 "data_offset": 0, 00:11:12.162 "data_size": 63488 
00:11:12.162 }, 00:11:12.162 { 00:11:12.162 "name": "BaseBdev3", 00:11:12.162 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:12.162 "is_configured": true, 00:11:12.162 "data_offset": 2048, 00:11:12.162 "data_size": 63488 00:11:12.162 }, 00:11:12.162 { 00:11:12.162 "name": "BaseBdev4", 00:11:12.162 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:12.162 "is_configured": true, 00:11:12.162 "data_offset": 2048, 00:11:12.162 "data_size": 63488 00:11:12.162 } 00:11:12.162 ] 00:11:12.162 }' 00:11:12.162 09:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.162 09:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 [2024-10-15 09:10:30.286823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.457 BaseBdev1 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.457 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 [ 00:11:12.457 { 00:11:12.457 "name": "BaseBdev1", 00:11:12.457 "aliases": [ 00:11:12.457 "86c8a5eb-c774-4ef4-8511-e98d557c1478" 00:11:12.457 ], 00:11:12.457 "product_name": "Malloc disk", 00:11:12.457 "block_size": 512, 00:11:12.457 "num_blocks": 65536, 00:11:12.457 "uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:12.457 "assigned_rate_limits": { 00:11:12.457 "rw_ios_per_sec": 0, 00:11:12.457 "rw_mbytes_per_sec": 0, 
00:11:12.457 "r_mbytes_per_sec": 0, 00:11:12.457 "w_mbytes_per_sec": 0 00:11:12.457 }, 00:11:12.457 "claimed": true, 00:11:12.457 "claim_type": "exclusive_write", 00:11:12.457 "zoned": false, 00:11:12.457 "supported_io_types": { 00:11:12.457 "read": true, 00:11:12.457 "write": true, 00:11:12.457 "unmap": true, 00:11:12.457 "flush": true, 00:11:12.457 "reset": true, 00:11:12.457 "nvme_admin": false, 00:11:12.457 "nvme_io": false, 00:11:12.457 "nvme_io_md": false, 00:11:12.457 "write_zeroes": true, 00:11:12.457 "zcopy": true, 00:11:12.457 "get_zone_info": false, 00:11:12.457 "zone_management": false, 00:11:12.457 "zone_append": false, 00:11:12.457 "compare": false, 00:11:12.457 "compare_and_write": false, 00:11:12.457 "abort": true, 00:11:12.457 "seek_hole": false, 00:11:12.457 "seek_data": false, 00:11:12.457 "copy": true, 00:11:12.457 "nvme_iov_md": false 00:11:12.457 }, 00:11:12.457 "memory_domains": [ 00:11:12.457 { 00:11:12.457 "dma_device_id": "system", 00:11:12.458 "dma_device_type": 1 00:11:12.458 }, 00:11:12.458 { 00:11:12.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.458 "dma_device_type": 2 00:11:12.458 } 00:11:12.458 ], 00:11:12.458 "driver_specific": {} 00:11:12.458 } 00:11:12.458 ] 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.458 09:10:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.458 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.717 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.717 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.717 "name": "Existed_Raid", 00:11:12.717 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:12.717 "strip_size_kb": 64, 00:11:12.717 "state": "configuring", 00:11:12.717 "raid_level": "concat", 00:11:12.717 "superblock": true, 00:11:12.717 "num_base_bdevs": 4, 00:11:12.717 "num_base_bdevs_discovered": 3, 00:11:12.717 "num_base_bdevs_operational": 4, 00:11:12.717 "base_bdevs_list": [ 00:11:12.717 { 00:11:12.718 "name": "BaseBdev1", 00:11:12.718 "uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:12.718 "is_configured": true, 00:11:12.718 "data_offset": 2048, 00:11:12.718 "data_size": 63488 00:11:12.718 }, 00:11:12.718 { 
00:11:12.718 "name": null, 00:11:12.718 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:12.718 "is_configured": false, 00:11:12.718 "data_offset": 0, 00:11:12.718 "data_size": 63488 00:11:12.718 }, 00:11:12.718 { 00:11:12.718 "name": "BaseBdev3", 00:11:12.718 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:12.718 "is_configured": true, 00:11:12.718 "data_offset": 2048, 00:11:12.718 "data_size": 63488 00:11:12.718 }, 00:11:12.718 { 00:11:12.718 "name": "BaseBdev4", 00:11:12.718 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:12.718 "is_configured": true, 00:11:12.718 "data_offset": 2048, 00:11:12.718 "data_size": 63488 00:11:12.718 } 00:11:12.718 ] 00:11:12.718 }' 00:11:12.718 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.718 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.977 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.977 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.977 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.977 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.977 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:12.977 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.978 [2024-10-15 09:10:30.798065] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.978 09:10:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.978 "name": "Existed_Raid", 00:11:12.978 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:12.978 "strip_size_kb": 64, 00:11:12.978 "state": "configuring", 00:11:12.978 "raid_level": "concat", 00:11:12.978 "superblock": true, 00:11:12.978 "num_base_bdevs": 4, 00:11:12.978 "num_base_bdevs_discovered": 2, 00:11:12.978 "num_base_bdevs_operational": 4, 00:11:12.978 "base_bdevs_list": [ 00:11:12.978 { 00:11:12.978 "name": "BaseBdev1", 00:11:12.978 "uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:12.978 "is_configured": true, 00:11:12.978 "data_offset": 2048, 00:11:12.978 "data_size": 63488 00:11:12.978 }, 00:11:12.978 { 00:11:12.978 "name": null, 00:11:12.978 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:12.978 "is_configured": false, 00:11:12.978 "data_offset": 0, 00:11:12.978 "data_size": 63488 00:11:12.978 }, 00:11:12.978 { 00:11:12.978 "name": null, 00:11:12.978 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:12.978 "is_configured": false, 00:11:12.978 "data_offset": 0, 00:11:12.978 "data_size": 63488 00:11:12.978 }, 00:11:12.978 { 00:11:12.978 "name": "BaseBdev4", 00:11:12.978 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:12.978 "is_configured": true, 00:11:12.978 "data_offset": 2048, 00:11:12.978 "data_size": 63488 00:11:12.978 } 00:11:12.978 ] 00:11:12.978 }' 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.978 09:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.546 
09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.546 [2024-10-15 09:10:31.281293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.546 "name": "Existed_Raid", 00:11:13.546 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:13.546 "strip_size_kb": 64, 00:11:13.546 "state": "configuring", 00:11:13.546 "raid_level": "concat", 00:11:13.546 "superblock": true, 00:11:13.546 "num_base_bdevs": 4, 00:11:13.546 "num_base_bdevs_discovered": 3, 00:11:13.546 "num_base_bdevs_operational": 4, 00:11:13.546 "base_bdevs_list": [ 00:11:13.546 { 00:11:13.546 "name": "BaseBdev1", 00:11:13.546 "uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:13.546 "is_configured": true, 00:11:13.546 "data_offset": 2048, 00:11:13.546 "data_size": 63488 00:11:13.546 }, 00:11:13.546 { 00:11:13.546 "name": null, 00:11:13.546 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:13.546 "is_configured": false, 00:11:13.546 "data_offset": 0, 00:11:13.546 "data_size": 63488 00:11:13.546 }, 00:11:13.546 { 00:11:13.546 "name": "BaseBdev3", 00:11:13.546 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:13.546 "is_configured": true, 00:11:13.546 "data_offset": 2048, 00:11:13.546 "data_size": 63488 00:11:13.546 }, 00:11:13.546 { 00:11:13.546 "name": "BaseBdev4", 00:11:13.546 "uuid": 
"55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:13.546 "is_configured": true, 00:11:13.546 "data_offset": 2048, 00:11:13.546 "data_size": 63488 00:11:13.546 } 00:11:13.546 ] 00:11:13.546 }' 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.546 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.115 [2024-10-15 09:10:31.780532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.115 "name": "Existed_Raid", 00:11:14.115 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:14.115 "strip_size_kb": 64, 00:11:14.115 "state": "configuring", 00:11:14.115 "raid_level": "concat", 00:11:14.115 "superblock": true, 00:11:14.115 "num_base_bdevs": 4, 00:11:14.115 "num_base_bdevs_discovered": 2, 00:11:14.115 "num_base_bdevs_operational": 4, 00:11:14.115 "base_bdevs_list": [ 00:11:14.115 { 00:11:14.115 "name": null, 00:11:14.115 
"uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:14.115 "is_configured": false, 00:11:14.115 "data_offset": 0, 00:11:14.115 "data_size": 63488 00:11:14.115 }, 00:11:14.115 { 00:11:14.115 "name": null, 00:11:14.115 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:14.115 "is_configured": false, 00:11:14.115 "data_offset": 0, 00:11:14.115 "data_size": 63488 00:11:14.115 }, 00:11:14.115 { 00:11:14.115 "name": "BaseBdev3", 00:11:14.115 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:14.115 "is_configured": true, 00:11:14.115 "data_offset": 2048, 00:11:14.115 "data_size": 63488 00:11:14.115 }, 00:11:14.115 { 00:11:14.115 "name": "BaseBdev4", 00:11:14.115 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:14.115 "is_configured": true, 00:11:14.115 "data_offset": 2048, 00:11:14.115 "data_size": 63488 00:11:14.115 } 00:11:14.115 ] 00:11:14.115 }' 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.115 09:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.685 [2024-10-15 09:10:32.397162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.685 "name": "Existed_Raid", 00:11:14.685 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:14.685 "strip_size_kb": 64, 00:11:14.685 "state": "configuring", 00:11:14.685 "raid_level": "concat", 00:11:14.685 "superblock": true, 00:11:14.685 "num_base_bdevs": 4, 00:11:14.685 "num_base_bdevs_discovered": 3, 00:11:14.685 "num_base_bdevs_operational": 4, 00:11:14.685 "base_bdevs_list": [ 00:11:14.685 { 00:11:14.685 "name": null, 00:11:14.685 "uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:14.685 "is_configured": false, 00:11:14.685 "data_offset": 0, 00:11:14.685 "data_size": 63488 00:11:14.685 }, 00:11:14.685 { 00:11:14.685 "name": "BaseBdev2", 00:11:14.685 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:14.685 "is_configured": true, 00:11:14.685 "data_offset": 2048, 00:11:14.685 "data_size": 63488 00:11:14.685 }, 00:11:14.685 { 00:11:14.685 "name": "BaseBdev3", 00:11:14.685 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:14.685 "is_configured": true, 00:11:14.685 "data_offset": 2048, 00:11:14.685 "data_size": 63488 00:11:14.685 }, 00:11:14.685 { 00:11:14.685 "name": "BaseBdev4", 00:11:14.685 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:14.685 "is_configured": true, 00:11:14.685 "data_offset": 2048, 00:11:14.685 "data_size": 63488 00:11:14.685 } 00:11:14.685 ] 00:11:14.685 }' 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.685 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.265 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.265 09:10:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.265 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.265 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.265 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.265 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:15.265 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:15.266 09:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.266 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.266 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.266 09:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 86c8a5eb-c774-4ef4-8511-e98d557c1478 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.266 [2024-10-15 09:10:33.059724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:15.266 [2024-10-15 09:10:33.060018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:15.266 [2024-10-15 09:10:33.060032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:15.266 [2024-10-15 09:10:33.060355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:15.266 [2024-10-15 09:10:33.060527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:15.266 [2024-10-15 09:10:33.060543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:15.266 NewBaseBdev 00:11:15.266 [2024-10-15 09:10:33.060690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.266 09:10:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.266 [ 00:11:15.266 { 00:11:15.266 "name": "NewBaseBdev", 00:11:15.266 "aliases": [ 00:11:15.266 "86c8a5eb-c774-4ef4-8511-e98d557c1478" 00:11:15.266 ], 00:11:15.266 "product_name": "Malloc disk", 00:11:15.266 "block_size": 512, 00:11:15.266 "num_blocks": 65536, 00:11:15.266 "uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:15.266 "assigned_rate_limits": { 00:11:15.266 "rw_ios_per_sec": 0, 00:11:15.266 "rw_mbytes_per_sec": 0, 00:11:15.266 "r_mbytes_per_sec": 0, 00:11:15.266 "w_mbytes_per_sec": 0 00:11:15.266 }, 00:11:15.266 "claimed": true, 00:11:15.266 "claim_type": "exclusive_write", 00:11:15.266 "zoned": false, 00:11:15.266 "supported_io_types": { 00:11:15.266 "read": true, 00:11:15.266 "write": true, 00:11:15.266 "unmap": true, 00:11:15.266 "flush": true, 00:11:15.266 "reset": true, 00:11:15.266 "nvme_admin": false, 00:11:15.266 "nvme_io": false, 00:11:15.266 "nvme_io_md": false, 00:11:15.266 "write_zeroes": true, 00:11:15.266 "zcopy": true, 00:11:15.266 "get_zone_info": false, 00:11:15.266 "zone_management": false, 00:11:15.266 "zone_append": false, 00:11:15.266 "compare": false, 00:11:15.266 "compare_and_write": false, 00:11:15.266 "abort": true, 00:11:15.266 "seek_hole": false, 00:11:15.266 "seek_data": false, 00:11:15.266 "copy": true, 00:11:15.266 "nvme_iov_md": false 00:11:15.266 }, 00:11:15.266 "memory_domains": [ 00:11:15.266 { 00:11:15.266 "dma_device_id": "system", 00:11:15.266 "dma_device_type": 1 00:11:15.266 }, 00:11:15.266 { 00:11:15.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.266 "dma_device_type": 2 00:11:15.266 } 00:11:15.266 ], 00:11:15.266 "driver_specific": {} 00:11:15.266 } 00:11:15.266 ] 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:15.266 09:10:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.266 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.526 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.526 "name": "Existed_Raid", 00:11:15.526 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:15.526 "strip_size_kb": 64, 00:11:15.526 
"state": "online", 00:11:15.526 "raid_level": "concat", 00:11:15.526 "superblock": true, 00:11:15.526 "num_base_bdevs": 4, 00:11:15.526 "num_base_bdevs_discovered": 4, 00:11:15.526 "num_base_bdevs_operational": 4, 00:11:15.526 "base_bdevs_list": [ 00:11:15.526 { 00:11:15.526 "name": "NewBaseBdev", 00:11:15.526 "uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:15.526 "is_configured": true, 00:11:15.526 "data_offset": 2048, 00:11:15.526 "data_size": 63488 00:11:15.526 }, 00:11:15.526 { 00:11:15.526 "name": "BaseBdev2", 00:11:15.526 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:15.526 "is_configured": true, 00:11:15.526 "data_offset": 2048, 00:11:15.526 "data_size": 63488 00:11:15.526 }, 00:11:15.526 { 00:11:15.526 "name": "BaseBdev3", 00:11:15.526 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:15.526 "is_configured": true, 00:11:15.526 "data_offset": 2048, 00:11:15.526 "data_size": 63488 00:11:15.526 }, 00:11:15.526 { 00:11:15.526 "name": "BaseBdev4", 00:11:15.526 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:15.526 "is_configured": true, 00:11:15.526 "data_offset": 2048, 00:11:15.526 "data_size": 63488 00:11:15.526 } 00:11:15.526 ] 00:11:15.526 }' 00:11:15.526 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.526 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.786 
09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.786 [2024-10-15 09:10:33.619275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.786 "name": "Existed_Raid", 00:11:15.786 "aliases": [ 00:11:15.786 "2e7b89c9-3530-4247-a514-688d9f59bc14" 00:11:15.786 ], 00:11:15.786 "product_name": "Raid Volume", 00:11:15.786 "block_size": 512, 00:11:15.786 "num_blocks": 253952, 00:11:15.786 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:15.786 "assigned_rate_limits": { 00:11:15.786 "rw_ios_per_sec": 0, 00:11:15.786 "rw_mbytes_per_sec": 0, 00:11:15.786 "r_mbytes_per_sec": 0, 00:11:15.786 "w_mbytes_per_sec": 0 00:11:15.786 }, 00:11:15.786 "claimed": false, 00:11:15.786 "zoned": false, 00:11:15.786 "supported_io_types": { 00:11:15.786 "read": true, 00:11:15.786 "write": true, 00:11:15.786 "unmap": true, 00:11:15.786 "flush": true, 00:11:15.786 "reset": true, 00:11:15.786 "nvme_admin": false, 00:11:15.786 "nvme_io": false, 00:11:15.786 "nvme_io_md": false, 00:11:15.786 "write_zeroes": true, 00:11:15.786 "zcopy": false, 00:11:15.786 "get_zone_info": false, 00:11:15.786 "zone_management": false, 00:11:15.786 "zone_append": false, 00:11:15.786 "compare": false, 00:11:15.786 "compare_and_write": false, 00:11:15.786 "abort": 
false, 00:11:15.786 "seek_hole": false, 00:11:15.786 "seek_data": false, 00:11:15.786 "copy": false, 00:11:15.786 "nvme_iov_md": false 00:11:15.786 }, 00:11:15.786 "memory_domains": [ 00:11:15.786 { 00:11:15.786 "dma_device_id": "system", 00:11:15.786 "dma_device_type": 1 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.786 "dma_device_type": 2 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "dma_device_id": "system", 00:11:15.786 "dma_device_type": 1 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.786 "dma_device_type": 2 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "dma_device_id": "system", 00:11:15.786 "dma_device_type": 1 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.786 "dma_device_type": 2 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "dma_device_id": "system", 00:11:15.786 "dma_device_type": 1 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.786 "dma_device_type": 2 00:11:15.786 } 00:11:15.786 ], 00:11:15.786 "driver_specific": { 00:11:15.786 "raid": { 00:11:15.786 "uuid": "2e7b89c9-3530-4247-a514-688d9f59bc14", 00:11:15.786 "strip_size_kb": 64, 00:11:15.786 "state": "online", 00:11:15.786 "raid_level": "concat", 00:11:15.786 "superblock": true, 00:11:15.786 "num_base_bdevs": 4, 00:11:15.786 "num_base_bdevs_discovered": 4, 00:11:15.786 "num_base_bdevs_operational": 4, 00:11:15.786 "base_bdevs_list": [ 00:11:15.786 { 00:11:15.786 "name": "NewBaseBdev", 00:11:15.786 "uuid": "86c8a5eb-c774-4ef4-8511-e98d557c1478", 00:11:15.786 "is_configured": true, 00:11:15.786 "data_offset": 2048, 00:11:15.786 "data_size": 63488 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "name": "BaseBdev2", 00:11:15.786 "uuid": "cd5cac0d-27c4-4ed3-a2e2-91b0b38108fe", 00:11:15.786 "is_configured": true, 00:11:15.786 "data_offset": 2048, 00:11:15.786 "data_size": 63488 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 
"name": "BaseBdev3", 00:11:15.786 "uuid": "b88758c2-5ded-4b82-8d21-a2f0db2dfb57", 00:11:15.786 "is_configured": true, 00:11:15.786 "data_offset": 2048, 00:11:15.786 "data_size": 63488 00:11:15.786 }, 00:11:15.786 { 00:11:15.786 "name": "BaseBdev4", 00:11:15.786 "uuid": "55cd9f51-1ffd-4ab9-a989-e0c7763349ef", 00:11:15.786 "is_configured": true, 00:11:15.786 "data_offset": 2048, 00:11:15.786 "data_size": 63488 00:11:15.786 } 00:11:15.786 ] 00:11:15.786 } 00:11:15.786 } 00:11:15.786 }' 00:11:15.786 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:16.046 BaseBdev2 00:11:16.046 BaseBdev3 00:11:16.046 BaseBdev4' 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.046 09:10:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.046 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.047 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.047 [2024-10-15 09:10:33.938342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.047 [2024-10-15 09:10:33.938454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.047 [2024-10-15 09:10:33.938602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.047 [2024-10-15 09:10:33.938721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.047 [2024-10-15 09:10:33.938805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72062 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72062 ']' 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72062 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72062 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.306 killing process with pid 72062 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72062' 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72062 00:11:16.306 [2024-10-15 09:10:33.978971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.306 09:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72062 00:11:16.566 [2024-10-15 09:10:34.395716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.949 09:10:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:17.949 00:11:17.949 real 0m11.993s 00:11:17.949 user 0m18.918s 00:11:17.949 sys 0m2.260s 00:11:17.949 09:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.949 
************************************ 00:11:17.949 END TEST raid_state_function_test_sb 00:11:17.949 ************************************ 00:11:17.949 09:10:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.949 09:10:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:17.949 09:10:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:17.949 09:10:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.949 09:10:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.949 ************************************ 00:11:17.949 START TEST raid_superblock_test 00:11:17.949 ************************************ 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72739 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72739 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72739 ']' 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:17.949 09:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.949 [2024-10-15 09:10:35.719461] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:11:17.949 [2024-10-15 09:10:35.719733] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72739 ] 00:11:18.209 [2024-10-15 09:10:35.896904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.209 [2024-10-15 09:10:36.017736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.470 [2024-10-15 09:10:36.225082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.470 [2024-10-15 09:10:36.225213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:18.729 
09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.729 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.989 malloc1 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 [2024-10-15 09:10:36.639481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.990 [2024-10-15 09:10:36.639608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.990 [2024-10-15 09:10:36.639649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:18.990 [2024-10-15 09:10:36.639678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.990 [2024-10-15 09:10:36.642048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.990 [2024-10-15 09:10:36.642126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.990 pt1 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 malloc2 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 [2024-10-15 09:10:36.700123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.990 [2024-10-15 09:10:36.700234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.990 [2024-10-15 09:10:36.700260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:18.990 [2024-10-15 09:10:36.700270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.990 [2024-10-15 09:10:36.702578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.990 [2024-10-15 09:10:36.702617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.990 
pt2 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 malloc3 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 [2024-10-15 09:10:36.770545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:18.990 [2024-10-15 09:10:36.770717] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.990 [2024-10-15 09:10:36.770762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:18.990 [2024-10-15 09:10:36.770794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.990 [2024-10-15 09:10:36.773230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.990 [2024-10-15 09:10:36.773346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:18.990 pt3 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 malloc4 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 [2024-10-15 09:10:36.830074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:18.990 [2024-10-15 09:10:36.830201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.990 [2024-10-15 09:10:36.830228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:18.990 [2024-10-15 09:10:36.830239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.990 [2024-10-15 09:10:36.832417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.990 [2024-10-15 09:10:36.832457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:18.990 pt4 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 [2024-10-15 09:10:36.842113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:18.990 [2024-10-15 
09:10:36.843930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.990 [2024-10-15 09:10:36.843988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:18.990 [2024-10-15 09:10:36.844048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:18.990 [2024-10-15 09:10:36.844238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:18.990 [2024-10-15 09:10:36.844249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:18.990 [2024-10-15 09:10:36.844527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:18.990 [2024-10-15 09:10:36.844679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:18.990 [2024-10-15 09:10:36.844704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:18.990 [2024-10-15 09:10:36.844910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.990 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.250 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.250 "name": "raid_bdev1", 00:11:19.250 "uuid": "01bb235f-1d3c-4c01-897c-f8e994657c1f", 00:11:19.250 "strip_size_kb": 64, 00:11:19.250 "state": "online", 00:11:19.250 "raid_level": "concat", 00:11:19.250 "superblock": true, 00:11:19.250 "num_base_bdevs": 4, 00:11:19.250 "num_base_bdevs_discovered": 4, 00:11:19.250 "num_base_bdevs_operational": 4, 00:11:19.250 "base_bdevs_list": [ 00:11:19.250 { 00:11:19.250 "name": "pt1", 00:11:19.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.250 "is_configured": true, 00:11:19.250 "data_offset": 2048, 00:11:19.250 "data_size": 63488 00:11:19.250 }, 00:11:19.250 { 00:11:19.250 "name": "pt2", 00:11:19.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.250 "is_configured": true, 00:11:19.250 "data_offset": 2048, 00:11:19.251 "data_size": 63488 00:11:19.251 }, 00:11:19.251 { 00:11:19.251 "name": "pt3", 00:11:19.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.251 "is_configured": true, 00:11:19.251 "data_offset": 2048, 00:11:19.251 
"data_size": 63488 00:11:19.251 }, 00:11:19.251 { 00:11:19.251 "name": "pt4", 00:11:19.251 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.251 "is_configured": true, 00:11:19.251 "data_offset": 2048, 00:11:19.251 "data_size": 63488 00:11:19.251 } 00:11:19.251 ] 00:11:19.251 }' 00:11:19.251 09:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.251 09:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.510 [2024-10-15 09:10:37.329646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.510 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.510 "name": "raid_bdev1", 00:11:19.510 "aliases": [ 00:11:19.510 "01bb235f-1d3c-4c01-897c-f8e994657c1f" 
00:11:19.510 ], 00:11:19.510 "product_name": "Raid Volume", 00:11:19.510 "block_size": 512, 00:11:19.510 "num_blocks": 253952, 00:11:19.510 "uuid": "01bb235f-1d3c-4c01-897c-f8e994657c1f", 00:11:19.510 "assigned_rate_limits": { 00:11:19.510 "rw_ios_per_sec": 0, 00:11:19.511 "rw_mbytes_per_sec": 0, 00:11:19.511 "r_mbytes_per_sec": 0, 00:11:19.511 "w_mbytes_per_sec": 0 00:11:19.511 }, 00:11:19.511 "claimed": false, 00:11:19.511 "zoned": false, 00:11:19.511 "supported_io_types": { 00:11:19.511 "read": true, 00:11:19.511 "write": true, 00:11:19.511 "unmap": true, 00:11:19.511 "flush": true, 00:11:19.511 "reset": true, 00:11:19.511 "nvme_admin": false, 00:11:19.511 "nvme_io": false, 00:11:19.511 "nvme_io_md": false, 00:11:19.511 "write_zeroes": true, 00:11:19.511 "zcopy": false, 00:11:19.511 "get_zone_info": false, 00:11:19.511 "zone_management": false, 00:11:19.511 "zone_append": false, 00:11:19.511 "compare": false, 00:11:19.511 "compare_and_write": false, 00:11:19.511 "abort": false, 00:11:19.511 "seek_hole": false, 00:11:19.511 "seek_data": false, 00:11:19.511 "copy": false, 00:11:19.511 "nvme_iov_md": false 00:11:19.511 }, 00:11:19.511 "memory_domains": [ 00:11:19.511 { 00:11:19.511 "dma_device_id": "system", 00:11:19.511 "dma_device_type": 1 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.511 "dma_device_type": 2 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "dma_device_id": "system", 00:11:19.511 "dma_device_type": 1 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.511 "dma_device_type": 2 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "dma_device_id": "system", 00:11:19.511 "dma_device_type": 1 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.511 "dma_device_type": 2 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "dma_device_id": "system", 00:11:19.511 "dma_device_type": 1 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:19.511 "dma_device_type": 2 00:11:19.511 } 00:11:19.511 ], 00:11:19.511 "driver_specific": { 00:11:19.511 "raid": { 00:11:19.511 "uuid": "01bb235f-1d3c-4c01-897c-f8e994657c1f", 00:11:19.511 "strip_size_kb": 64, 00:11:19.511 "state": "online", 00:11:19.511 "raid_level": "concat", 00:11:19.511 "superblock": true, 00:11:19.511 "num_base_bdevs": 4, 00:11:19.511 "num_base_bdevs_discovered": 4, 00:11:19.511 "num_base_bdevs_operational": 4, 00:11:19.511 "base_bdevs_list": [ 00:11:19.511 { 00:11:19.511 "name": "pt1", 00:11:19.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.511 "is_configured": true, 00:11:19.511 "data_offset": 2048, 00:11:19.511 "data_size": 63488 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "name": "pt2", 00:11:19.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.511 "is_configured": true, 00:11:19.511 "data_offset": 2048, 00:11:19.511 "data_size": 63488 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "name": "pt3", 00:11:19.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.511 "is_configured": true, 00:11:19.511 "data_offset": 2048, 00:11:19.511 "data_size": 63488 00:11:19.511 }, 00:11:19.511 { 00:11:19.511 "name": "pt4", 00:11:19.511 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.511 "is_configured": true, 00:11:19.511 "data_offset": 2048, 00:11:19.511 "data_size": 63488 00:11:19.511 } 00:11:19.511 ] 00:11:19.511 } 00:11:19.511 } 00:11:19.511 }' 00:11:19.511 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:19.771 pt2 00:11:19.771 pt3 00:11:19.771 pt4' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.771 09:10:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.771 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:19.772 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:19.772 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.772 [2024-10-15 09:10:37.637087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.772 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=01bb235f-1d3c-4c01-897c-f8e994657c1f 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 01bb235f-1d3c-4c01-897c-f8e994657c1f ']' 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 [2024-10-15 09:10:37.692647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.032 [2024-10-15 09:10:37.692759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.032 [2024-10-15 09:10:37.692906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.032 [2024-10-15 09:10:37.693007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.032 [2024-10-15 09:10:37.693072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:20.032 09:10:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 [2024-10-15 09:10:37.860398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:20.032 [2024-10-15 09:10:37.862628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:20.032 [2024-10-15 09:10:37.862754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:20.032 [2024-10-15 09:10:37.862834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:20.032 [2024-10-15 09:10:37.862929] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:20.032 [2024-10-15 09:10:37.863038] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:20.032 [2024-10-15 09:10:37.863113] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:20.032 [2024-10-15 09:10:37.863178] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:20.032 [2024-10-15 09:10:37.863245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.032 [2024-10-15 09:10:37.863293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:20.032 request: 00:11:20.032 { 00:11:20.032 "name": "raid_bdev1", 00:11:20.032 "raid_level": "concat", 00:11:20.032 "base_bdevs": [ 00:11:20.032 "malloc1", 00:11:20.032 "malloc2", 00:11:20.032 "malloc3", 00:11:20.032 "malloc4" 00:11:20.032 ], 00:11:20.032 "strip_size_kb": 64, 00:11:20.032 "superblock": false, 00:11:20.032 "method": "bdev_raid_create", 00:11:20.032 "req_id": 1 00:11:20.032 } 00:11:20.032 Got JSON-RPC error response 00:11:20.032 response: 00:11:20.032 { 00:11:20.032 "code": -17, 00:11:20.032 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:20.032 } 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.032 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.304 [2024-10-15 09:10:37.928231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:20.304 [2024-10-15 09:10:37.928377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.304 [2024-10-15 09:10:37.928403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:20.304 [2024-10-15 09:10:37.928416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.304 [2024-10-15 09:10:37.930866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.304 [2024-10-15 09:10:37.930912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:20.304 [2024-10-15 09:10:37.931017] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:20.304 [2024-10-15 09:10:37.931097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.304 pt1 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.304 "name": "raid_bdev1", 00:11:20.304 "uuid": "01bb235f-1d3c-4c01-897c-f8e994657c1f", 00:11:20.304 "strip_size_kb": 64, 00:11:20.304 "state": "configuring", 00:11:20.304 "raid_level": "concat", 00:11:20.304 "superblock": true, 00:11:20.304 "num_base_bdevs": 4, 00:11:20.304 "num_base_bdevs_discovered": 1, 00:11:20.304 "num_base_bdevs_operational": 4, 00:11:20.304 "base_bdevs_list": [ 00:11:20.304 { 00:11:20.304 "name": "pt1", 00:11:20.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.304 "is_configured": true, 00:11:20.304 "data_offset": 2048, 00:11:20.304 "data_size": 63488 00:11:20.304 }, 00:11:20.304 { 00:11:20.304 "name": null, 00:11:20.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.304 "is_configured": false, 00:11:20.304 "data_offset": 2048, 00:11:20.304 "data_size": 63488 00:11:20.304 }, 00:11:20.304 { 00:11:20.304 "name": null, 00:11:20.304 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.304 "is_configured": false, 00:11:20.304 "data_offset": 2048, 00:11:20.304 "data_size": 63488 00:11:20.304 }, 00:11:20.304 { 00:11:20.304 "name": null, 00:11:20.304 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.304 "is_configured": false, 00:11:20.304 "data_offset": 2048, 00:11:20.304 "data_size": 63488 00:11:20.304 } 00:11:20.304 ] 00:11:20.304 }' 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.304 09:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.573 [2024-10-15 09:10:38.407434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:20.573 [2024-10-15 09:10:38.407585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.573 [2024-10-15 09:10:38.407622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:20.573 [2024-10-15 09:10:38.407653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.573 [2024-10-15 09:10:38.408235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.573 [2024-10-15 09:10:38.408309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:20.573 [2024-10-15 09:10:38.408438] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:20.573 [2024-10-15 09:10:38.408498] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:20.573 pt2 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.573 [2024-10-15 09:10:38.419408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:20.573 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.574 09:10:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.574 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.833 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.834 "name": "raid_bdev1", 00:11:20.834 "uuid": "01bb235f-1d3c-4c01-897c-f8e994657c1f", 00:11:20.834 "strip_size_kb": 64, 00:11:20.834 "state": "configuring", 00:11:20.834 "raid_level": "concat", 00:11:20.834 "superblock": true, 00:11:20.834 "num_base_bdevs": 4, 00:11:20.834 "num_base_bdevs_discovered": 1, 00:11:20.834 "num_base_bdevs_operational": 4, 00:11:20.834 "base_bdevs_list": [ 00:11:20.834 { 00:11:20.834 "name": "pt1", 00:11:20.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.834 "is_configured": true, 00:11:20.834 "data_offset": 2048, 00:11:20.834 "data_size": 63488 00:11:20.834 }, 00:11:20.834 { 00:11:20.834 "name": null, 00:11:20.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.834 "is_configured": false, 00:11:20.834 "data_offset": 0, 00:11:20.834 "data_size": 63488 00:11:20.834 }, 00:11:20.834 { 00:11:20.834 "name": null, 00:11:20.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.834 "is_configured": false, 00:11:20.834 "data_offset": 2048, 00:11:20.834 "data_size": 63488 00:11:20.834 }, 00:11:20.834 { 00:11:20.834 "name": null, 00:11:20.834 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.834 "is_configured": false, 00:11:20.834 "data_offset": 2048, 00:11:20.834 "data_size": 63488 00:11:20.834 } 00:11:20.834 ] 00:11:20.834 }' 00:11:20.834 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.834 09:10:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.093 [2024-10-15 09:10:38.918602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.093 [2024-10-15 09:10:38.918676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.093 [2024-10-15 09:10:38.918706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:21.093 [2024-10-15 09:10:38.918716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.093 [2024-10-15 09:10:38.919191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.093 [2024-10-15 09:10:38.919223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.093 [2024-10-15 09:10:38.919314] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:21.093 [2024-10-15 09:10:38.919337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.093 pt2 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.093 [2024-10-15 09:10:38.930575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:21.093 [2024-10-15 09:10:38.930649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.093 [2024-10-15 09:10:38.930679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:21.093 [2024-10-15 09:10:38.930709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.093 [2024-10-15 09:10:38.931206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.093 [2024-10-15 09:10:38.931232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:21.093 [2024-10-15 09:10:38.931334] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:21.093 [2024-10-15 09:10:38.931358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:21.093 pt3 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.093 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.094 [2024-10-15 09:10:38.942503] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:21.094 [2024-10-15 09:10:38.942560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.094 [2024-10-15 09:10:38.942581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:21.094 [2024-10-15 09:10:38.942590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.094 [2024-10-15 09:10:38.943060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.094 [2024-10-15 09:10:38.943081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:21.094 [2024-10-15 09:10:38.943166] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:21.094 [2024-10-15 09:10:38.943193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:21.094 [2024-10-15 09:10:38.943335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.094 [2024-10-15 09:10:38.943343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:21.094 [2024-10-15 09:10:38.943584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:21.094 [2024-10-15 09:10:38.943757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.094 [2024-10-15 09:10:38.943771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:21.094 [2024-10-15 09:10:38.943914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.094 pt4 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.094 09:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.353 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.353 "name": "raid_bdev1", 00:11:21.353 "uuid": "01bb235f-1d3c-4c01-897c-f8e994657c1f", 00:11:21.353 "strip_size_kb": 64, 00:11:21.353 "state": "online", 00:11:21.353 "raid_level": "concat", 00:11:21.353 
"superblock": true, 00:11:21.353 "num_base_bdevs": 4, 00:11:21.353 "num_base_bdevs_discovered": 4, 00:11:21.353 "num_base_bdevs_operational": 4, 00:11:21.353 "base_bdevs_list": [ 00:11:21.353 { 00:11:21.353 "name": "pt1", 00:11:21.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.353 "is_configured": true, 00:11:21.353 "data_offset": 2048, 00:11:21.353 "data_size": 63488 00:11:21.353 }, 00:11:21.353 { 00:11:21.353 "name": "pt2", 00:11:21.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.353 "is_configured": true, 00:11:21.353 "data_offset": 2048, 00:11:21.353 "data_size": 63488 00:11:21.353 }, 00:11:21.353 { 00:11:21.353 "name": "pt3", 00:11:21.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.353 "is_configured": true, 00:11:21.353 "data_offset": 2048, 00:11:21.353 "data_size": 63488 00:11:21.353 }, 00:11:21.353 { 00:11:21.353 "name": "pt4", 00:11:21.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.353 "is_configured": true, 00:11:21.353 "data_offset": 2048, 00:11:21.353 "data_size": 63488 00:11:21.353 } 00:11:21.353 ] 00:11:21.353 }' 00:11:21.353 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.353 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.613 09:10:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.613 [2024-10-15 09:10:39.406185] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.613 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.613 "name": "raid_bdev1", 00:11:21.614 "aliases": [ 00:11:21.614 "01bb235f-1d3c-4c01-897c-f8e994657c1f" 00:11:21.614 ], 00:11:21.614 "product_name": "Raid Volume", 00:11:21.614 "block_size": 512, 00:11:21.614 "num_blocks": 253952, 00:11:21.614 "uuid": "01bb235f-1d3c-4c01-897c-f8e994657c1f", 00:11:21.614 "assigned_rate_limits": { 00:11:21.614 "rw_ios_per_sec": 0, 00:11:21.614 "rw_mbytes_per_sec": 0, 00:11:21.614 "r_mbytes_per_sec": 0, 00:11:21.614 "w_mbytes_per_sec": 0 00:11:21.614 }, 00:11:21.614 "claimed": false, 00:11:21.614 "zoned": false, 00:11:21.614 "supported_io_types": { 00:11:21.614 "read": true, 00:11:21.614 "write": true, 00:11:21.614 "unmap": true, 00:11:21.614 "flush": true, 00:11:21.614 "reset": true, 00:11:21.614 "nvme_admin": false, 00:11:21.614 "nvme_io": false, 00:11:21.614 "nvme_io_md": false, 00:11:21.614 "write_zeroes": true, 00:11:21.614 "zcopy": false, 00:11:21.614 "get_zone_info": false, 00:11:21.614 "zone_management": false, 00:11:21.614 "zone_append": false, 00:11:21.614 "compare": false, 00:11:21.614 "compare_and_write": false, 00:11:21.614 "abort": false, 00:11:21.614 "seek_hole": false, 00:11:21.614 "seek_data": false, 00:11:21.614 "copy": false, 00:11:21.614 "nvme_iov_md": false 00:11:21.614 }, 00:11:21.614 
"memory_domains": [ 00:11:21.614 { 00:11:21.614 "dma_device_id": "system", 00:11:21.614 "dma_device_type": 1 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.614 "dma_device_type": 2 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "dma_device_id": "system", 00:11:21.614 "dma_device_type": 1 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.614 "dma_device_type": 2 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "dma_device_id": "system", 00:11:21.614 "dma_device_type": 1 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.614 "dma_device_type": 2 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "dma_device_id": "system", 00:11:21.614 "dma_device_type": 1 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.614 "dma_device_type": 2 00:11:21.614 } 00:11:21.614 ], 00:11:21.614 "driver_specific": { 00:11:21.614 "raid": { 00:11:21.614 "uuid": "01bb235f-1d3c-4c01-897c-f8e994657c1f", 00:11:21.614 "strip_size_kb": 64, 00:11:21.614 "state": "online", 00:11:21.614 "raid_level": "concat", 00:11:21.614 "superblock": true, 00:11:21.614 "num_base_bdevs": 4, 00:11:21.614 "num_base_bdevs_discovered": 4, 00:11:21.614 "num_base_bdevs_operational": 4, 00:11:21.614 "base_bdevs_list": [ 00:11:21.614 { 00:11:21.614 "name": "pt1", 00:11:21.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.614 "is_configured": true, 00:11:21.614 "data_offset": 2048, 00:11:21.614 "data_size": 63488 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "name": "pt2", 00:11:21.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.614 "is_configured": true, 00:11:21.614 "data_offset": 2048, 00:11:21.614 "data_size": 63488 00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "name": "pt3", 00:11:21.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.614 "is_configured": true, 00:11:21.614 "data_offset": 2048, 00:11:21.614 "data_size": 63488 
00:11:21.614 }, 00:11:21.614 { 00:11:21.614 "name": "pt4", 00:11:21.614 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.614 "is_configured": true, 00:11:21.614 "data_offset": 2048, 00:11:21.614 "data_size": 63488 00:11:21.614 } 00:11:21.614 ] 00:11:21.614 } 00:11:21.614 } 00:11:21.614 }' 00:11:21.614 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.614 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:21.614 pt2 00:11:21.614 pt3 00:11:21.614 pt4' 00:11:21.614 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.874 
09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:21.874 [2024-10-15 09:10:39.693769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 01bb235f-1d3c-4c01-897c-f8e994657c1f '!=' 01bb235f-1d3c-4c01-897c-f8e994657c1f ']' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72739 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72739 ']' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72739 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72739 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72739' 00:11:21.874 killing process with pid 72739 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72739 00:11:21.874 [2024-10-15 09:10:39.756370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.874 09:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72739 00:11:21.874 [2024-10-15 09:10:39.756545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.874 [2024-10-15 09:10:39.756633] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.874 [2024-10-15 09:10:39.756744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:22.442 [2024-10-15 09:10:40.168084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.818 09:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:23.818 00:11:23.818 real 0m5.690s 00:11:23.818 user 0m8.148s 00:11:23.818 sys 0m1.026s 00:11:23.818 09:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.818 09:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.818 ************************************ 00:11:23.818 END TEST raid_superblock_test 
00:11:23.818 ************************************ 00:11:23.818 09:10:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:23.818 09:10:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:23.818 09:10:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.818 09:10:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.818 ************************************ 00:11:23.818 START TEST raid_read_error_test 00:11:23.818 ************************************ 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eaHWvRQT4y 00:11:23.818 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:23.819 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- 
# raid_pid=72998 00:11:23.819 09:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72998 00:11:23.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.819 09:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72998 ']' 00:11:23.819 09:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.819 09:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.819 09:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.819 09:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.819 09:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.819 [2024-10-15 09:10:41.488366] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:11:23.819 [2024-10-15 09:10:41.488640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72998 ] 00:11:23.819 [2024-10-15 09:10:41.649718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.077 [2024-10-15 09:10:41.773250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.336 [2024-10-15 09:10:41.997809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.336 [2024-10-15 09:10:41.997964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.595 BaseBdev1_malloc 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.595 true 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.595 [2024-10-15 09:10:42.451069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:24.595 [2024-10-15 09:10:42.451198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.595 [2024-10-15 09:10:42.451239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:24.595 [2024-10-15 09:10:42.451257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.595 [2024-10-15 09:10:42.453909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.595 [2024-10-15 09:10:42.453966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:24.595 BaseBdev1 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.595 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.859 BaseBdev2_malloc 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.859 true 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.859 [2024-10-15 09:10:42.523916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:24.859 [2024-10-15 09:10:42.524004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.859 [2024-10-15 09:10:42.524036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:24.859 [2024-10-15 09:10:42.524053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.859 [2024-10-15 09:10:42.526773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.859 [2024-10-15 09:10:42.526833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:24.859 BaseBdev2 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.859 BaseBdev3_malloc 00:11:24.859 09:10:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.859 true 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.859 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.859 [2024-10-15 09:10:42.603308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:24.859 [2024-10-15 09:10:42.603374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.859 [2024-10-15 09:10:42.603403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:24.859 [2024-10-15 09:10:42.603419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.859 [2024-10-15 09:10:42.606001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.860 [2024-10-15 09:10:42.606050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:24.860 BaseBdev3 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.860 BaseBdev4_malloc 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.860 true 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.860 [2024-10-15 09:10:42.676272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:24.860 [2024-10-15 09:10:42.676336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.860 [2024-10-15 09:10:42.676366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:24.860 [2024-10-15 09:10:42.676385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.860 [2024-10-15 09:10:42.678882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.860 [2024-10-15 09:10:42.678974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:24.860 BaseBdev4 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.860 [2024-10-15 09:10:42.688318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.860 [2024-10-15 09:10:42.690520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.860 [2024-10-15 09:10:42.690726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.860 [2024-10-15 09:10:42.690840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.860 [2024-10-15 09:10:42.691151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:24.860 [2024-10-15 09:10:42.691173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:24.860 [2024-10-15 09:10:42.691478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.860 [2024-10-15 09:10:42.691673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:24.860 [2024-10-15 09:10:42.691717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:24.860 [2024-10-15 09:10:42.691933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:24.860 09:10:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.860 "name": "raid_bdev1", 00:11:24.860 "uuid": "800c631a-c7c9-42e6-9c35-e71cfe7848ea", 00:11:24.860 "strip_size_kb": 64, 00:11:24.860 "state": "online", 00:11:24.860 "raid_level": "concat", 00:11:24.860 "superblock": true, 00:11:24.860 "num_base_bdevs": 4, 00:11:24.860 "num_base_bdevs_discovered": 4, 00:11:24.860 "num_base_bdevs_operational": 4, 00:11:24.860 "base_bdevs_list": [ 
00:11:24.860 { 00:11:24.860 "name": "BaseBdev1", 00:11:24.860 "uuid": "c30b9585-1904-5229-ad5e-91b41be18489", 00:11:24.860 "is_configured": true, 00:11:24.860 "data_offset": 2048, 00:11:24.860 "data_size": 63488 00:11:24.860 }, 00:11:24.860 { 00:11:24.860 "name": "BaseBdev2", 00:11:24.860 "uuid": "d0568055-9f4f-59be-bb22-e1ae02f700e7", 00:11:24.860 "is_configured": true, 00:11:24.860 "data_offset": 2048, 00:11:24.860 "data_size": 63488 00:11:24.860 }, 00:11:24.860 { 00:11:24.860 "name": "BaseBdev3", 00:11:24.860 "uuid": "e01c4c8f-2a88-58ec-b9f2-0780b1395c39", 00:11:24.860 "is_configured": true, 00:11:24.860 "data_offset": 2048, 00:11:24.860 "data_size": 63488 00:11:24.860 }, 00:11:24.860 { 00:11:24.860 "name": "BaseBdev4", 00:11:24.860 "uuid": "08732515-4dd3-556c-9bdb-50bf1b5d8d8c", 00:11:24.860 "is_configured": true, 00:11:24.860 "data_offset": 2048, 00:11:24.860 "data_size": 63488 00:11:24.860 } 00:11:24.860 ] 00:11:24.860 }' 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.860 09:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.427 09:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:25.427 09:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:25.427 [2024-10-15 09:10:43.244870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.374 09:10:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.374 09:10:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.374 "name": "raid_bdev1", 00:11:26.374 "uuid": "800c631a-c7c9-42e6-9c35-e71cfe7848ea", 00:11:26.374 "strip_size_kb": 64, 00:11:26.374 "state": "online", 00:11:26.374 "raid_level": "concat", 00:11:26.374 "superblock": true, 00:11:26.374 "num_base_bdevs": 4, 00:11:26.374 "num_base_bdevs_discovered": 4, 00:11:26.374 "num_base_bdevs_operational": 4, 00:11:26.374 "base_bdevs_list": [ 00:11:26.374 { 00:11:26.374 "name": "BaseBdev1", 00:11:26.374 "uuid": "c30b9585-1904-5229-ad5e-91b41be18489", 00:11:26.374 "is_configured": true, 00:11:26.374 "data_offset": 2048, 00:11:26.374 "data_size": 63488 00:11:26.374 }, 00:11:26.374 { 00:11:26.374 "name": "BaseBdev2", 00:11:26.374 "uuid": "d0568055-9f4f-59be-bb22-e1ae02f700e7", 00:11:26.374 "is_configured": true, 00:11:26.374 "data_offset": 2048, 00:11:26.374 "data_size": 63488 00:11:26.374 }, 00:11:26.374 { 00:11:26.374 "name": "BaseBdev3", 00:11:26.374 "uuid": "e01c4c8f-2a88-58ec-b9f2-0780b1395c39", 00:11:26.374 "is_configured": true, 00:11:26.374 "data_offset": 2048, 00:11:26.374 "data_size": 63488 00:11:26.374 }, 00:11:26.374 { 00:11:26.374 "name": "BaseBdev4", 00:11:26.374 "uuid": "08732515-4dd3-556c-9bdb-50bf1b5d8d8c", 00:11:26.374 "is_configured": true, 00:11:26.374 "data_offset": 2048, 00:11:26.374 "data_size": 63488 00:11:26.374 } 00:11:26.374 ] 00:11:26.374 }' 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.374 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.942 [2024-10-15 09:10:44.670132] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.942 [2024-10-15 09:10:44.670250] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.942 [2024-10-15 09:10:44.673285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.942 [2024-10-15 09:10:44.673426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.942 [2024-10-15 09:10:44.673503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.942 [2024-10-15 09:10:44.673578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:26.942 { 00:11:26.942 "results": [ 00:11:26.942 { 00:11:26.942 "job": "raid_bdev1", 00:11:26.942 "core_mask": "0x1", 00:11:26.942 "workload": "randrw", 00:11:26.942 "percentage": 50, 00:11:26.942 "status": "finished", 00:11:26.942 "queue_depth": 1, 00:11:26.942 "io_size": 131072, 00:11:26.942 "runtime": 1.426041, 00:11:26.942 "iops": 14042.373255747907, 00:11:26.942 "mibps": 1755.2966569684884, 00:11:26.942 "io_failed": 1, 00:11:26.942 "io_timeout": 0, 00:11:26.942 "avg_latency_us": 99.08693545552353, 00:11:26.942 "min_latency_us": 27.612227074235808, 00:11:26.942 "max_latency_us": 1695.6366812227075 00:11:26.942 } 00:11:26.942 ], 00:11:26.942 "core_count": 1 00:11:26.942 } 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72998 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72998 ']' 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72998 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72998 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72998' 00:11:26.942 killing process with pid 72998 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72998 00:11:26.942 [2024-10-15 09:10:44.721313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.942 09:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72998 00:11:27.201 [2024-10-15 09:10:45.088483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.576 09:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eaHWvRQT4y 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:28.577 00:11:28.577 real 0m5.023s 00:11:28.577 user 0m5.948s 00:11:28.577 sys 0m0.645s 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:28.577 09:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.577 ************************************ 00:11:28.577 END TEST raid_read_error_test 00:11:28.577 ************************************ 00:11:28.577 09:10:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:28.577 09:10:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:28.577 09:10:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.577 09:10:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.577 ************************************ 00:11:28.577 START TEST raid_write_error_test 00:11:28.577 ************************************ 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.577 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.74ZJWUrQSU 00:11:28.894 09:10:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73149 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73149 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73149 ']' 00:11:28.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.894 09:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.894 [2024-10-15 09:10:46.580320] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:11:28.894 [2024-10-15 09:10:46.580475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73149 ] 00:11:28.894 [2024-10-15 09:10:46.730702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.153 [2024-10-15 09:10:46.851680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.412 [2024-10-15 09:10:47.057703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.412 [2024-10-15 09:10:47.057788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.671 BaseBdev1_malloc 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.671 true 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.671 [2024-10-15 09:10:47.503424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:29.671 [2024-10-15 09:10:47.503489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.671 [2024-10-15 09:10:47.503516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:29.671 [2024-10-15 09:10:47.503531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.671 [2024-10-15 09:10:47.505988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.671 [2024-10-15 09:10:47.506035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:29.671 BaseBdev1 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.671 BaseBdev2_malloc 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:29.671 09:10:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.671 true 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.671 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.671 [2024-10-15 09:10:47.562835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:29.671 [2024-10-15 09:10:47.562897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.671 [2024-10-15 09:10:47.562922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:29.671 [2024-10-15 09:10:47.562937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.671 [2024-10-15 09:10:47.565371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.671 [2024-10-15 09:10:47.565416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:29.930 BaseBdev2 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:29.930 BaseBdev3_malloc 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.930 true 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.930 [2024-10-15 09:10:47.633598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:29.930 [2024-10-15 09:10:47.633736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.930 [2024-10-15 09:10:47.633772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:29.930 [2024-10-15 09:10:47.633788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.930 [2024-10-15 09:10:47.636187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.930 [2024-10-15 09:10:47.636229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:29.930 BaseBdev3 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.930 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.931 BaseBdev4_malloc 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.931 true 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.931 [2024-10-15 09:10:47.689750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:29.931 [2024-10-15 09:10:47.689819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.931 [2024-10-15 09:10:47.689850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:29.931 [2024-10-15 09:10:47.689867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.931 [2024-10-15 09:10:47.692150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.931 [2024-10-15 09:10:47.692193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:29.931 BaseBdev4 
00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.931 [2024-10-15 09:10:47.697766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.931 [2024-10-15 09:10:47.699636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.931 [2024-10-15 09:10:47.699750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.931 [2024-10-15 09:10:47.699840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.931 [2024-10-15 09:10:47.700085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:29.931 [2024-10-15 09:10:47.700103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:29.931 [2024-10-15 09:10:47.700360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:29.931 [2024-10-15 09:10:47.700521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:29.931 [2024-10-15 09:10:47.700532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:29.931 [2024-10-15 09:10:47.700705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.931 "name": "raid_bdev1", 00:11:29.931 "uuid": "0c9c12fa-ca58-4a9e-97c0-e888ea55dd61", 00:11:29.931 "strip_size_kb": 64, 00:11:29.931 "state": "online", 00:11:29.931 "raid_level": "concat", 00:11:29.931 "superblock": true, 00:11:29.931 "num_base_bdevs": 4, 00:11:29.931 "num_base_bdevs_discovered": 4, 00:11:29.931 
"num_base_bdevs_operational": 4, 00:11:29.931 "base_bdevs_list": [ 00:11:29.931 { 00:11:29.931 "name": "BaseBdev1", 00:11:29.931 "uuid": "457cb68f-36f2-5e91-b378-7a442b00714f", 00:11:29.931 "is_configured": true, 00:11:29.931 "data_offset": 2048, 00:11:29.931 "data_size": 63488 00:11:29.931 }, 00:11:29.931 { 00:11:29.931 "name": "BaseBdev2", 00:11:29.931 "uuid": "861b1837-cf00-5186-be90-0e0f54b13da1", 00:11:29.931 "is_configured": true, 00:11:29.931 "data_offset": 2048, 00:11:29.931 "data_size": 63488 00:11:29.931 }, 00:11:29.931 { 00:11:29.931 "name": "BaseBdev3", 00:11:29.931 "uuid": "254c9cc5-6707-5b86-ad57-d6c9bdeac5da", 00:11:29.931 "is_configured": true, 00:11:29.931 "data_offset": 2048, 00:11:29.931 "data_size": 63488 00:11:29.931 }, 00:11:29.931 { 00:11:29.931 "name": "BaseBdev4", 00:11:29.931 "uuid": "c551bba6-4b66-50dd-af29-1c217dada83c", 00:11:29.931 "is_configured": true, 00:11:29.931 "data_offset": 2048, 00:11:29.931 "data_size": 63488 00:11:29.931 } 00:11:29.931 ] 00:11:29.931 }' 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.931 09:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.500 09:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:30.500 09:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:30.500 [2024-10-15 09:10:48.290155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.436 09:10:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.436 "name": "raid_bdev1", 00:11:31.436 "uuid": "0c9c12fa-ca58-4a9e-97c0-e888ea55dd61", 00:11:31.436 "strip_size_kb": 64, 00:11:31.436 "state": "online", 00:11:31.436 "raid_level": "concat", 00:11:31.436 "superblock": true, 00:11:31.436 "num_base_bdevs": 4, 00:11:31.436 "num_base_bdevs_discovered": 4, 00:11:31.436 "num_base_bdevs_operational": 4, 00:11:31.436 "base_bdevs_list": [ 00:11:31.436 { 00:11:31.436 "name": "BaseBdev1", 00:11:31.436 "uuid": "457cb68f-36f2-5e91-b378-7a442b00714f", 00:11:31.436 "is_configured": true, 00:11:31.436 "data_offset": 2048, 00:11:31.436 "data_size": 63488 00:11:31.436 }, 00:11:31.436 { 00:11:31.436 "name": "BaseBdev2", 00:11:31.436 "uuid": "861b1837-cf00-5186-be90-0e0f54b13da1", 00:11:31.436 "is_configured": true, 00:11:31.436 "data_offset": 2048, 00:11:31.436 "data_size": 63488 00:11:31.436 }, 00:11:31.436 { 00:11:31.436 "name": "BaseBdev3", 00:11:31.436 "uuid": "254c9cc5-6707-5b86-ad57-d6c9bdeac5da", 00:11:31.436 "is_configured": true, 00:11:31.436 "data_offset": 2048, 00:11:31.436 "data_size": 63488 00:11:31.436 }, 00:11:31.436 { 00:11:31.436 "name": "BaseBdev4", 00:11:31.436 "uuid": "c551bba6-4b66-50dd-af29-1c217dada83c", 00:11:31.436 "is_configured": true, 00:11:31.436 "data_offset": 2048, 00:11:31.436 "data_size": 63488 00:11:31.436 } 00:11:31.436 ] 00:11:31.436 }' 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.436 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.005 [2024-10-15 09:10:49.666680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.005 [2024-10-15 09:10:49.666795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.005 [2024-10-15 09:10:49.670112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.005 [2024-10-15 09:10:49.670230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.005 [2024-10-15 09:10:49.670327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.005 [2024-10-15 09:10:49.670401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:32.005 { 00:11:32.005 "results": [ 00:11:32.005 { 00:11:32.005 "job": "raid_bdev1", 00:11:32.005 "core_mask": "0x1", 00:11:32.005 "workload": "randrw", 00:11:32.005 "percentage": 50, 00:11:32.005 "status": "finished", 00:11:32.005 "queue_depth": 1, 00:11:32.005 "io_size": 131072, 00:11:32.005 "runtime": 1.377311, 00:11:32.005 "iops": 14712.726464828931, 00:11:32.005 "mibps": 1839.0908081036164, 00:11:32.005 "io_failed": 1, 00:11:32.005 "io_timeout": 0, 00:11:32.005 "avg_latency_us": 94.63390788213378, 00:11:32.005 "min_latency_us": 26.717903930131005, 00:11:32.005 "max_latency_us": 1445.2262008733624 00:11:32.005 } 00:11:32.005 ], 00:11:32.005 "core_count": 1 00:11:32.005 } 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73149 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73149 ']' 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73149 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73149 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73149' 00:11:32.005 killing process with pid 73149 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73149 00:11:32.005 [2024-10-15 09:10:49.715276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.005 09:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73149 00:11:32.264 [2024-10-15 09:10:50.044468] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.74ZJWUrQSU 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:33.642 ************************************ 00:11:33.642 END TEST raid_write_error_test 00:11:33.642 ************************************ 00:11:33.642 09:10:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:33.642 00:11:33.642 real 0m4.784s 00:11:33.642 user 0m5.710s 00:11:33.642 sys 0m0.589s 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.642 09:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.642 09:10:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:33.642 09:10:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:33.642 09:10:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:33.642 09:10:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.642 09:10:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.642 ************************************ 00:11:33.642 START TEST raid_state_function_test 00:11:33.642 ************************************ 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:33.642 09:10:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73293 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73293' 00:11:33.642 Process raid pid: 73293 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73293 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73293 ']' 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.642 09:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.642 [2024-10-15 09:10:51.438200] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:11:33.642 [2024-10-15 09:10:51.438445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.901 [2024-10-15 09:10:51.613438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.901 [2024-10-15 09:10:51.736892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.159 [2024-10-15 09:10:51.949174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.159 [2024-10-15 09:10:51.949306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.419 [2024-10-15 09:10:52.290163] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.419 [2024-10-15 09:10:52.290297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.419 [2024-10-15 09:10:52.290317] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.419 [2024-10-15 09:10:52.290330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.419 [2024-10-15 09:10:52.290339] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:34.419 [2024-10-15 09:10:52.290350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.419 [2024-10-15 09:10:52.290359] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:34.419 [2024-10-15 09:10:52.290370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.419 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.679 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.679 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.679 "name": "Existed_Raid", 00:11:34.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.679 "strip_size_kb": 0, 00:11:34.679 "state": "configuring", 00:11:34.679 "raid_level": "raid1", 00:11:34.679 "superblock": false, 00:11:34.679 "num_base_bdevs": 4, 00:11:34.679 "num_base_bdevs_discovered": 0, 00:11:34.679 "num_base_bdevs_operational": 4, 00:11:34.680 "base_bdevs_list": [ 00:11:34.680 { 00:11:34.680 "name": "BaseBdev1", 00:11:34.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.680 "is_configured": false, 00:11:34.680 "data_offset": 0, 00:11:34.680 "data_size": 0 00:11:34.680 }, 00:11:34.680 { 00:11:34.680 "name": "BaseBdev2", 00:11:34.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.680 "is_configured": false, 00:11:34.680 "data_offset": 0, 00:11:34.680 "data_size": 0 00:11:34.680 }, 00:11:34.680 { 00:11:34.680 "name": "BaseBdev3", 00:11:34.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.680 "is_configured": false, 00:11:34.680 "data_offset": 0, 00:11:34.680 "data_size": 0 00:11:34.680 }, 00:11:34.680 { 00:11:34.680 "name": "BaseBdev4", 00:11:34.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.680 "is_configured": false, 00:11:34.680 "data_offset": 0, 00:11:34.680 "data_size": 0 00:11:34.680 } 00:11:34.680 ] 00:11:34.680 }' 00:11:34.680 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.680 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.942 [2024-10-15 09:10:52.749325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.942 [2024-10-15 09:10:52.749437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.942 [2024-10-15 09:10:52.757327] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.942 [2024-10-15 09:10:52.757428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.942 [2024-10-15 09:10:52.757482] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.942 [2024-10-15 09:10:52.757550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.942 [2024-10-15 09:10:52.757597] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.942 [2024-10-15 09:10:52.757651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.942 [2024-10-15 09:10:52.757726] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:34.942 [2024-10-15 09:10:52.757787] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.942 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.943 [2024-10-15 09:10:52.801830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.943 BaseBdev1 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.943 [ 00:11:34.943 { 00:11:34.943 "name": "BaseBdev1", 00:11:34.943 "aliases": [ 00:11:34.943 "071c5cf1-a723-4b1f-baac-8eee37184ad2" 00:11:34.943 ], 00:11:34.943 "product_name": "Malloc disk", 00:11:34.943 "block_size": 512, 00:11:34.943 "num_blocks": 65536, 00:11:34.943 "uuid": "071c5cf1-a723-4b1f-baac-8eee37184ad2", 00:11:34.943 "assigned_rate_limits": { 00:11:34.943 "rw_ios_per_sec": 0, 00:11:34.943 "rw_mbytes_per_sec": 0, 00:11:34.943 "r_mbytes_per_sec": 0, 00:11:34.943 "w_mbytes_per_sec": 0 00:11:34.943 }, 00:11:34.943 "claimed": true, 00:11:34.943 "claim_type": "exclusive_write", 00:11:34.943 "zoned": false, 00:11:34.943 "supported_io_types": { 00:11:34.943 "read": true, 00:11:34.943 "write": true, 00:11:34.943 "unmap": true, 00:11:34.943 "flush": true, 00:11:34.943 "reset": true, 00:11:34.943 "nvme_admin": false, 00:11:34.943 "nvme_io": false, 00:11:34.943 "nvme_io_md": false, 00:11:34.943 "write_zeroes": true, 00:11:34.943 "zcopy": true, 00:11:34.943 "get_zone_info": false, 00:11:34.943 "zone_management": false, 00:11:34.943 "zone_append": false, 00:11:34.943 "compare": false, 00:11:34.943 "compare_and_write": false, 00:11:34.943 "abort": true, 00:11:34.943 "seek_hole": false, 00:11:34.943 "seek_data": false, 00:11:34.943 "copy": true, 00:11:34.943 "nvme_iov_md": false 00:11:34.943 }, 00:11:34.943 "memory_domains": [ 00:11:34.943 { 00:11:34.943 "dma_device_id": "system", 00:11:34.943 "dma_device_type": 1 00:11:34.943 }, 00:11:34.943 { 00:11:34.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.943 "dma_device_type": 2 00:11:34.943 } 00:11:34.943 ], 00:11:34.943 "driver_specific": {} 00:11:34.943 } 00:11:34.943 ] 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.943 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.202 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.202 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.202 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.202 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.202 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.202 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.202 "name": "Existed_Raid", 
00:11:35.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.202 "strip_size_kb": 0, 00:11:35.202 "state": "configuring", 00:11:35.202 "raid_level": "raid1", 00:11:35.202 "superblock": false, 00:11:35.202 "num_base_bdevs": 4, 00:11:35.202 "num_base_bdevs_discovered": 1, 00:11:35.202 "num_base_bdevs_operational": 4, 00:11:35.202 "base_bdevs_list": [ 00:11:35.202 { 00:11:35.202 "name": "BaseBdev1", 00:11:35.202 "uuid": "071c5cf1-a723-4b1f-baac-8eee37184ad2", 00:11:35.202 "is_configured": true, 00:11:35.202 "data_offset": 0, 00:11:35.202 "data_size": 65536 00:11:35.202 }, 00:11:35.202 { 00:11:35.202 "name": "BaseBdev2", 00:11:35.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.202 "is_configured": false, 00:11:35.202 "data_offset": 0, 00:11:35.202 "data_size": 0 00:11:35.202 }, 00:11:35.202 { 00:11:35.202 "name": "BaseBdev3", 00:11:35.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.202 "is_configured": false, 00:11:35.202 "data_offset": 0, 00:11:35.202 "data_size": 0 00:11:35.202 }, 00:11:35.202 { 00:11:35.202 "name": "BaseBdev4", 00:11:35.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.202 "is_configured": false, 00:11:35.202 "data_offset": 0, 00:11:35.202 "data_size": 0 00:11:35.202 } 00:11:35.202 ] 00:11:35.202 }' 00:11:35.202 09:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.202 09:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.462 [2024-10-15 09:10:53.301039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.462 [2024-10-15 09:10:53.301195] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.462 [2024-10-15 09:10:53.313088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.462 [2024-10-15 09:10:53.314960] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.462 [2024-10-15 09:10:53.315003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.462 [2024-10-15 09:10:53.315012] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.462 [2024-10-15 09:10:53.315023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.462 [2024-10-15 09:10:53.315029] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.462 [2024-10-15 09:10:53.315038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.462 
09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.462 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.721 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.721 "name": "Existed_Raid", 00:11:35.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.721 "strip_size_kb": 0, 00:11:35.721 "state": "configuring", 00:11:35.721 "raid_level": "raid1", 00:11:35.721 "superblock": false, 00:11:35.721 "num_base_bdevs": 4, 00:11:35.721 "num_base_bdevs_discovered": 1, 
00:11:35.721 "num_base_bdevs_operational": 4, 00:11:35.721 "base_bdevs_list": [ 00:11:35.721 { 00:11:35.721 "name": "BaseBdev1", 00:11:35.721 "uuid": "071c5cf1-a723-4b1f-baac-8eee37184ad2", 00:11:35.721 "is_configured": true, 00:11:35.721 "data_offset": 0, 00:11:35.721 "data_size": 65536 00:11:35.721 }, 00:11:35.721 { 00:11:35.721 "name": "BaseBdev2", 00:11:35.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.721 "is_configured": false, 00:11:35.721 "data_offset": 0, 00:11:35.721 "data_size": 0 00:11:35.721 }, 00:11:35.721 { 00:11:35.721 "name": "BaseBdev3", 00:11:35.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.721 "is_configured": false, 00:11:35.721 "data_offset": 0, 00:11:35.721 "data_size": 0 00:11:35.721 }, 00:11:35.721 { 00:11:35.721 "name": "BaseBdev4", 00:11:35.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.721 "is_configured": false, 00:11:35.721 "data_offset": 0, 00:11:35.721 "data_size": 0 00:11:35.721 } 00:11:35.721 ] 00:11:35.721 }' 00:11:35.721 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.721 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.980 [2024-10-15 09:10:53.795836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.980 BaseBdev2 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.980 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.980 [ 00:11:35.980 { 00:11:35.980 "name": "BaseBdev2", 00:11:35.980 "aliases": [ 00:11:35.980 "2a52f58a-6345-4d95-b739-12c13e36e12d" 00:11:35.980 ], 00:11:35.980 "product_name": "Malloc disk", 00:11:35.980 "block_size": 512, 00:11:35.980 "num_blocks": 65536, 00:11:35.980 "uuid": "2a52f58a-6345-4d95-b739-12c13e36e12d", 00:11:35.980 "assigned_rate_limits": { 00:11:35.980 "rw_ios_per_sec": 0, 00:11:35.980 "rw_mbytes_per_sec": 0, 00:11:35.980 "r_mbytes_per_sec": 0, 00:11:35.980 "w_mbytes_per_sec": 0 00:11:35.980 }, 00:11:35.980 "claimed": true, 00:11:35.981 "claim_type": "exclusive_write", 00:11:35.981 "zoned": false, 00:11:35.981 "supported_io_types": { 00:11:35.981 "read": true, 
00:11:35.981 "write": true, 00:11:35.981 "unmap": true, 00:11:35.981 "flush": true, 00:11:35.981 "reset": true, 00:11:35.981 "nvme_admin": false, 00:11:35.981 "nvme_io": false, 00:11:35.981 "nvme_io_md": false, 00:11:35.981 "write_zeroes": true, 00:11:35.981 "zcopy": true, 00:11:35.981 "get_zone_info": false, 00:11:35.981 "zone_management": false, 00:11:35.981 "zone_append": false, 00:11:35.981 "compare": false, 00:11:35.981 "compare_and_write": false, 00:11:35.981 "abort": true, 00:11:35.981 "seek_hole": false, 00:11:35.981 "seek_data": false, 00:11:35.981 "copy": true, 00:11:35.981 "nvme_iov_md": false 00:11:35.981 }, 00:11:35.981 "memory_domains": [ 00:11:35.981 { 00:11:35.981 "dma_device_id": "system", 00:11:35.981 "dma_device_type": 1 00:11:35.981 }, 00:11:35.981 { 00:11:35.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.981 "dma_device_type": 2 00:11:35.981 } 00:11:35.981 ], 00:11:35.981 "driver_specific": {} 00:11:35.981 } 00:11:35.981 ] 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.981 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.241 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.241 "name": "Existed_Raid", 00:11:36.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.241 "strip_size_kb": 0, 00:11:36.241 "state": "configuring", 00:11:36.241 "raid_level": "raid1", 00:11:36.241 "superblock": false, 00:11:36.241 "num_base_bdevs": 4, 00:11:36.241 "num_base_bdevs_discovered": 2, 00:11:36.241 "num_base_bdevs_operational": 4, 00:11:36.241 "base_bdevs_list": [ 00:11:36.241 { 00:11:36.241 "name": "BaseBdev1", 00:11:36.241 "uuid": "071c5cf1-a723-4b1f-baac-8eee37184ad2", 00:11:36.241 "is_configured": true, 00:11:36.241 "data_offset": 0, 00:11:36.241 "data_size": 65536 00:11:36.241 }, 00:11:36.241 { 00:11:36.241 "name": "BaseBdev2", 00:11:36.241 "uuid": "2a52f58a-6345-4d95-b739-12c13e36e12d", 00:11:36.241 "is_configured": true, 
00:11:36.241 "data_offset": 0, 00:11:36.241 "data_size": 65536 00:11:36.241 }, 00:11:36.241 { 00:11:36.241 "name": "BaseBdev3", 00:11:36.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.241 "is_configured": false, 00:11:36.241 "data_offset": 0, 00:11:36.241 "data_size": 0 00:11:36.241 }, 00:11:36.241 { 00:11:36.241 "name": "BaseBdev4", 00:11:36.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.241 "is_configured": false, 00:11:36.241 "data_offset": 0, 00:11:36.241 "data_size": 0 00:11:36.241 } 00:11:36.241 ] 00:11:36.241 }' 00:11:36.241 09:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.241 09:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.500 [2024-10-15 09:10:54.304252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.500 BaseBdev3 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.500 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.500 [ 00:11:36.500 { 00:11:36.500 "name": "BaseBdev3", 00:11:36.500 "aliases": [ 00:11:36.500 "ea85bb31-f0b0-443b-abd4-d727d2761b25" 00:11:36.500 ], 00:11:36.500 "product_name": "Malloc disk", 00:11:36.500 "block_size": 512, 00:11:36.500 "num_blocks": 65536, 00:11:36.501 "uuid": "ea85bb31-f0b0-443b-abd4-d727d2761b25", 00:11:36.501 "assigned_rate_limits": { 00:11:36.501 "rw_ios_per_sec": 0, 00:11:36.501 "rw_mbytes_per_sec": 0, 00:11:36.501 "r_mbytes_per_sec": 0, 00:11:36.501 "w_mbytes_per_sec": 0 00:11:36.501 }, 00:11:36.501 "claimed": true, 00:11:36.501 "claim_type": "exclusive_write", 00:11:36.501 "zoned": false, 00:11:36.501 "supported_io_types": { 00:11:36.501 "read": true, 00:11:36.501 "write": true, 00:11:36.501 "unmap": true, 00:11:36.501 "flush": true, 00:11:36.501 "reset": true, 00:11:36.501 "nvme_admin": false, 00:11:36.501 "nvme_io": false, 00:11:36.501 "nvme_io_md": false, 00:11:36.501 "write_zeroes": true, 00:11:36.501 "zcopy": true, 00:11:36.501 "get_zone_info": false, 00:11:36.501 "zone_management": false, 00:11:36.501 "zone_append": false, 00:11:36.501 "compare": false, 00:11:36.501 "compare_and_write": false, 
00:11:36.501 "abort": true, 00:11:36.501 "seek_hole": false, 00:11:36.501 "seek_data": false, 00:11:36.501 "copy": true, 00:11:36.501 "nvme_iov_md": false 00:11:36.501 }, 00:11:36.501 "memory_domains": [ 00:11:36.501 { 00:11:36.501 "dma_device_id": "system", 00:11:36.501 "dma_device_type": 1 00:11:36.501 }, 00:11:36.501 { 00:11:36.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.501 "dma_device_type": 2 00:11:36.501 } 00:11:36.501 ], 00:11:36.501 "driver_specific": {} 00:11:36.501 } 00:11:36.501 ] 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.501 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.760 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.760 "name": "Existed_Raid", 00:11:36.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.760 "strip_size_kb": 0, 00:11:36.760 "state": "configuring", 00:11:36.760 "raid_level": "raid1", 00:11:36.760 "superblock": false, 00:11:36.760 "num_base_bdevs": 4, 00:11:36.760 "num_base_bdevs_discovered": 3, 00:11:36.760 "num_base_bdevs_operational": 4, 00:11:36.760 "base_bdevs_list": [ 00:11:36.760 { 00:11:36.760 "name": "BaseBdev1", 00:11:36.760 "uuid": "071c5cf1-a723-4b1f-baac-8eee37184ad2", 00:11:36.760 "is_configured": true, 00:11:36.760 "data_offset": 0, 00:11:36.760 "data_size": 65536 00:11:36.760 }, 00:11:36.760 { 00:11:36.760 "name": "BaseBdev2", 00:11:36.760 "uuid": "2a52f58a-6345-4d95-b739-12c13e36e12d", 00:11:36.760 "is_configured": true, 00:11:36.760 "data_offset": 0, 00:11:36.760 "data_size": 65536 00:11:36.760 }, 00:11:36.760 { 00:11:36.760 "name": "BaseBdev3", 00:11:36.760 "uuid": "ea85bb31-f0b0-443b-abd4-d727d2761b25", 00:11:36.760 "is_configured": true, 00:11:36.760 "data_offset": 0, 00:11:36.760 "data_size": 65536 00:11:36.760 }, 00:11:36.760 { 00:11:36.760 "name": "BaseBdev4", 00:11:36.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.760 "is_configured": false, 
00:11:36.760 "data_offset": 0, 00:11:36.760 "data_size": 0 00:11:36.760 } 00:11:36.760 ] 00:11:36.760 }' 00:11:36.760 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.760 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.020 [2024-10-15 09:10:54.862949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:37.020 [2024-10-15 09:10:54.863099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:37.020 [2024-10-15 09:10:54.863113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:37.020 [2024-10-15 09:10:54.863440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:37.020 [2024-10-15 09:10:54.863619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:37.020 [2024-10-15 09:10:54.863633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:37.020 [2024-10-15 09:10:54.863975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.020 BaseBdev4 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.020 [ 00:11:37.020 { 00:11:37.020 "name": "BaseBdev4", 00:11:37.020 "aliases": [ 00:11:37.020 "64f1c07e-6fc1-42c3-8fff-7615a22aad4d" 00:11:37.020 ], 00:11:37.020 "product_name": "Malloc disk", 00:11:37.020 "block_size": 512, 00:11:37.020 "num_blocks": 65536, 00:11:37.020 "uuid": "64f1c07e-6fc1-42c3-8fff-7615a22aad4d", 00:11:37.020 "assigned_rate_limits": { 00:11:37.020 "rw_ios_per_sec": 0, 00:11:37.020 "rw_mbytes_per_sec": 0, 00:11:37.020 "r_mbytes_per_sec": 0, 00:11:37.020 "w_mbytes_per_sec": 0 00:11:37.020 }, 00:11:37.020 "claimed": true, 00:11:37.020 "claim_type": "exclusive_write", 00:11:37.020 "zoned": false, 00:11:37.020 "supported_io_types": { 00:11:37.020 "read": true, 00:11:37.020 "write": true, 00:11:37.020 "unmap": true, 00:11:37.020 "flush": true, 00:11:37.020 "reset": true, 00:11:37.020 
"nvme_admin": false, 00:11:37.020 "nvme_io": false, 00:11:37.020 "nvme_io_md": false, 00:11:37.020 "write_zeroes": true, 00:11:37.020 "zcopy": true, 00:11:37.020 "get_zone_info": false, 00:11:37.020 "zone_management": false, 00:11:37.020 "zone_append": false, 00:11:37.020 "compare": false, 00:11:37.020 "compare_and_write": false, 00:11:37.020 "abort": true, 00:11:37.020 "seek_hole": false, 00:11:37.020 "seek_data": false, 00:11:37.020 "copy": true, 00:11:37.020 "nvme_iov_md": false 00:11:37.020 }, 00:11:37.020 "memory_domains": [ 00:11:37.020 { 00:11:37.020 "dma_device_id": "system", 00:11:37.020 "dma_device_type": 1 00:11:37.020 }, 00:11:37.020 { 00:11:37.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.020 "dma_device_type": 2 00:11:37.020 } 00:11:37.020 ], 00:11:37.020 "driver_specific": {} 00:11:37.020 } 00:11:37.020 ] 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.020 09:10:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.020 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.279 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.279 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.279 "name": "Existed_Raid", 00:11:37.279 "uuid": "0567181a-6caa-48a6-b0f4-a3c12e6567b7", 00:11:37.279 "strip_size_kb": 0, 00:11:37.279 "state": "online", 00:11:37.279 "raid_level": "raid1", 00:11:37.279 "superblock": false, 00:11:37.279 "num_base_bdevs": 4, 00:11:37.279 "num_base_bdevs_discovered": 4, 00:11:37.279 "num_base_bdevs_operational": 4, 00:11:37.279 "base_bdevs_list": [ 00:11:37.279 { 00:11:37.279 "name": "BaseBdev1", 00:11:37.279 "uuid": "071c5cf1-a723-4b1f-baac-8eee37184ad2", 00:11:37.279 "is_configured": true, 00:11:37.279 "data_offset": 0, 00:11:37.279 "data_size": 65536 00:11:37.279 }, 00:11:37.279 { 00:11:37.279 "name": "BaseBdev2", 00:11:37.279 "uuid": "2a52f58a-6345-4d95-b739-12c13e36e12d", 00:11:37.279 "is_configured": true, 00:11:37.279 "data_offset": 0, 00:11:37.279 "data_size": 65536 00:11:37.279 }, 00:11:37.279 { 00:11:37.279 "name": "BaseBdev3", 00:11:37.279 "uuid": 
"ea85bb31-f0b0-443b-abd4-d727d2761b25", 00:11:37.279 "is_configured": true, 00:11:37.279 "data_offset": 0, 00:11:37.279 "data_size": 65536 00:11:37.279 }, 00:11:37.279 { 00:11:37.279 "name": "BaseBdev4", 00:11:37.279 "uuid": "64f1c07e-6fc1-42c3-8fff-7615a22aad4d", 00:11:37.279 "is_configured": true, 00:11:37.279 "data_offset": 0, 00:11:37.279 "data_size": 65536 00:11:37.279 } 00:11:37.279 ] 00:11:37.279 }' 00:11:37.279 09:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.279 09:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.538 [2024-10-15 09:10:55.370514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.538 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.538 09:10:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.538 "name": "Existed_Raid", 00:11:37.538 "aliases": [ 00:11:37.539 "0567181a-6caa-48a6-b0f4-a3c12e6567b7" 00:11:37.539 ], 00:11:37.539 "product_name": "Raid Volume", 00:11:37.539 "block_size": 512, 00:11:37.539 "num_blocks": 65536, 00:11:37.539 "uuid": "0567181a-6caa-48a6-b0f4-a3c12e6567b7", 00:11:37.539 "assigned_rate_limits": { 00:11:37.539 "rw_ios_per_sec": 0, 00:11:37.539 "rw_mbytes_per_sec": 0, 00:11:37.539 "r_mbytes_per_sec": 0, 00:11:37.539 "w_mbytes_per_sec": 0 00:11:37.539 }, 00:11:37.539 "claimed": false, 00:11:37.539 "zoned": false, 00:11:37.539 "supported_io_types": { 00:11:37.539 "read": true, 00:11:37.539 "write": true, 00:11:37.539 "unmap": false, 00:11:37.539 "flush": false, 00:11:37.539 "reset": true, 00:11:37.539 "nvme_admin": false, 00:11:37.539 "nvme_io": false, 00:11:37.539 "nvme_io_md": false, 00:11:37.539 "write_zeroes": true, 00:11:37.539 "zcopy": false, 00:11:37.539 "get_zone_info": false, 00:11:37.539 "zone_management": false, 00:11:37.539 "zone_append": false, 00:11:37.539 "compare": false, 00:11:37.539 "compare_and_write": false, 00:11:37.539 "abort": false, 00:11:37.539 "seek_hole": false, 00:11:37.539 "seek_data": false, 00:11:37.539 "copy": false, 00:11:37.539 "nvme_iov_md": false 00:11:37.539 }, 00:11:37.539 "memory_domains": [ 00:11:37.539 { 00:11:37.539 "dma_device_id": "system", 00:11:37.539 "dma_device_type": 1 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.539 "dma_device_type": 2 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "dma_device_id": "system", 00:11:37.539 "dma_device_type": 1 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.539 "dma_device_type": 2 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "dma_device_id": "system", 00:11:37.539 "dma_device_type": 1 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:37.539 "dma_device_type": 2 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "dma_device_id": "system", 00:11:37.539 "dma_device_type": 1 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.539 "dma_device_type": 2 00:11:37.539 } 00:11:37.539 ], 00:11:37.539 "driver_specific": { 00:11:37.539 "raid": { 00:11:37.539 "uuid": "0567181a-6caa-48a6-b0f4-a3c12e6567b7", 00:11:37.539 "strip_size_kb": 0, 00:11:37.539 "state": "online", 00:11:37.539 "raid_level": "raid1", 00:11:37.539 "superblock": false, 00:11:37.539 "num_base_bdevs": 4, 00:11:37.539 "num_base_bdevs_discovered": 4, 00:11:37.539 "num_base_bdevs_operational": 4, 00:11:37.539 "base_bdevs_list": [ 00:11:37.539 { 00:11:37.539 "name": "BaseBdev1", 00:11:37.539 "uuid": "071c5cf1-a723-4b1f-baac-8eee37184ad2", 00:11:37.539 "is_configured": true, 00:11:37.539 "data_offset": 0, 00:11:37.539 "data_size": 65536 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "name": "BaseBdev2", 00:11:37.539 "uuid": "2a52f58a-6345-4d95-b739-12c13e36e12d", 00:11:37.539 "is_configured": true, 00:11:37.539 "data_offset": 0, 00:11:37.539 "data_size": 65536 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "name": "BaseBdev3", 00:11:37.539 "uuid": "ea85bb31-f0b0-443b-abd4-d727d2761b25", 00:11:37.539 "is_configured": true, 00:11:37.539 "data_offset": 0, 00:11:37.539 "data_size": 65536 00:11:37.539 }, 00:11:37.539 { 00:11:37.539 "name": "BaseBdev4", 00:11:37.539 "uuid": "64f1c07e-6fc1-42c3-8fff-7615a22aad4d", 00:11:37.539 "is_configured": true, 00:11:37.539 "data_offset": 0, 00:11:37.539 "data_size": 65536 00:11:37.539 } 00:11:37.539 ] 00:11:37.539 } 00:11:37.539 } 00:11:37.539 }' 00:11:37.539 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:37.798 BaseBdev2 00:11:37.798 BaseBdev3 
00:11:37.798 BaseBdev4' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.798 09:10:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.798 09:10:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.798 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.798 [2024-10-15 09:10:55.661759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.085 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.086 
09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.086 "name": "Existed_Raid", 00:11:38.086 "uuid": "0567181a-6caa-48a6-b0f4-a3c12e6567b7", 00:11:38.086 "strip_size_kb": 0, 00:11:38.086 "state": "online", 00:11:38.086 "raid_level": "raid1", 00:11:38.086 "superblock": false, 00:11:38.086 "num_base_bdevs": 4, 00:11:38.086 "num_base_bdevs_discovered": 3, 00:11:38.086 "num_base_bdevs_operational": 3, 00:11:38.086 "base_bdevs_list": [ 00:11:38.086 { 00:11:38.086 "name": null, 00:11:38.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.086 "is_configured": false, 00:11:38.086 "data_offset": 0, 00:11:38.086 "data_size": 65536 00:11:38.086 }, 00:11:38.086 { 00:11:38.086 "name": "BaseBdev2", 00:11:38.086 "uuid": "2a52f58a-6345-4d95-b739-12c13e36e12d", 00:11:38.086 "is_configured": true, 00:11:38.086 "data_offset": 0, 00:11:38.086 "data_size": 65536 00:11:38.086 }, 00:11:38.086 { 00:11:38.086 "name": "BaseBdev3", 00:11:38.086 "uuid": "ea85bb31-f0b0-443b-abd4-d727d2761b25", 00:11:38.086 "is_configured": true, 00:11:38.086 "data_offset": 0, 
00:11:38.086 "data_size": 65536 00:11:38.086 }, 00:11:38.086 { 00:11:38.086 "name": "BaseBdev4", 00:11:38.086 "uuid": "64f1c07e-6fc1-42c3-8fff-7615a22aad4d", 00:11:38.086 "is_configured": true, 00:11:38.086 "data_offset": 0, 00:11:38.086 "data_size": 65536 00:11:38.086 } 00:11:38.086 ] 00:11:38.086 }' 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.086 09:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.345 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:38.345 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.628 [2024-10-15 09:10:56.301079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.628 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.628 [2024-10-15 09:10:56.457462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.888 [2024-10-15 09:10:56.611871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:38.888 [2024-10-15 09:10:56.612047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.888 [2024-10-15 09:10:56.707093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.888 [2024-10-15 09:10:56.707155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.888 [2024-10-15 09:10:56.707167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.888 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.147 BaseBdev2 00:11:39.147 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.147 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:39.147 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 [ 00:11:39.148 { 00:11:39.148 "name": "BaseBdev2", 00:11:39.148 "aliases": [ 00:11:39.148 "195b0fd9-80eb-4bca-b68b-df99a0c86362" 00:11:39.148 ], 00:11:39.148 "product_name": "Malloc disk", 00:11:39.148 "block_size": 512, 00:11:39.148 "num_blocks": 65536, 00:11:39.148 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:39.148 "assigned_rate_limits": { 00:11:39.148 "rw_ios_per_sec": 0, 00:11:39.148 "rw_mbytes_per_sec": 0, 00:11:39.148 "r_mbytes_per_sec": 0, 00:11:39.148 "w_mbytes_per_sec": 0 00:11:39.148 }, 00:11:39.148 "claimed": false, 00:11:39.148 "zoned": false, 00:11:39.148 "supported_io_types": { 00:11:39.148 "read": true, 00:11:39.148 "write": true, 00:11:39.148 "unmap": true, 00:11:39.148 "flush": true, 00:11:39.148 "reset": true, 00:11:39.148 "nvme_admin": false, 00:11:39.148 "nvme_io": false, 00:11:39.148 "nvme_io_md": false, 00:11:39.148 "write_zeroes": true, 00:11:39.148 "zcopy": true, 00:11:39.148 "get_zone_info": false, 00:11:39.148 "zone_management": false, 00:11:39.148 "zone_append": false, 
00:11:39.148 "compare": false, 00:11:39.148 "compare_and_write": false, 00:11:39.148 "abort": true, 00:11:39.148 "seek_hole": false, 00:11:39.148 "seek_data": false, 00:11:39.148 "copy": true, 00:11:39.148 "nvme_iov_md": false 00:11:39.148 }, 00:11:39.148 "memory_domains": [ 00:11:39.148 { 00:11:39.148 "dma_device_id": "system", 00:11:39.148 "dma_device_type": 1 00:11:39.148 }, 00:11:39.148 { 00:11:39.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.148 "dma_device_type": 2 00:11:39.148 } 00:11:39.148 ], 00:11:39.148 "driver_specific": {} 00:11:39.148 } 00:11:39.148 ] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 BaseBdev3 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 [ 00:11:39.148 { 00:11:39.148 "name": "BaseBdev3", 00:11:39.148 "aliases": [ 00:11:39.148 "ffda5b2a-3779-45d6-aa08-a093a194e059" 00:11:39.148 ], 00:11:39.148 "product_name": "Malloc disk", 00:11:39.148 "block_size": 512, 00:11:39.148 "num_blocks": 65536, 00:11:39.148 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:39.148 "assigned_rate_limits": { 00:11:39.148 "rw_ios_per_sec": 0, 00:11:39.148 "rw_mbytes_per_sec": 0, 00:11:39.148 "r_mbytes_per_sec": 0, 00:11:39.148 "w_mbytes_per_sec": 0 00:11:39.148 }, 00:11:39.148 "claimed": false, 00:11:39.148 "zoned": false, 00:11:39.148 "supported_io_types": { 00:11:39.148 "read": true, 00:11:39.148 "write": true, 00:11:39.148 "unmap": true, 00:11:39.148 "flush": true, 00:11:39.148 "reset": true, 00:11:39.148 "nvme_admin": false, 00:11:39.148 "nvme_io": false, 00:11:39.148 "nvme_io_md": false, 00:11:39.148 "write_zeroes": true, 00:11:39.148 "zcopy": true, 00:11:39.148 "get_zone_info": false, 00:11:39.148 "zone_management": false, 00:11:39.148 "zone_append": false, 
00:11:39.148 "compare": false, 00:11:39.148 "compare_and_write": false, 00:11:39.148 "abort": true, 00:11:39.148 "seek_hole": false, 00:11:39.148 "seek_data": false, 00:11:39.148 "copy": true, 00:11:39.148 "nvme_iov_md": false 00:11:39.148 }, 00:11:39.148 "memory_domains": [ 00:11:39.148 { 00:11:39.148 "dma_device_id": "system", 00:11:39.148 "dma_device_type": 1 00:11:39.148 }, 00:11:39.148 { 00:11:39.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.148 "dma_device_type": 2 00:11:39.148 } 00:11:39.148 ], 00:11:39.148 "driver_specific": {} 00:11:39.148 } 00:11:39.148 ] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 BaseBdev4 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.148 09:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.148 [ 00:11:39.148 { 00:11:39.148 "name": "BaseBdev4", 00:11:39.148 "aliases": [ 00:11:39.148 "96c62a01-588f-4ef5-b1c1-09f232b03718" 00:11:39.148 ], 00:11:39.148 "product_name": "Malloc disk", 00:11:39.148 "block_size": 512, 00:11:39.148 "num_blocks": 65536, 00:11:39.148 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:39.148 "assigned_rate_limits": { 00:11:39.148 "rw_ios_per_sec": 0, 00:11:39.148 "rw_mbytes_per_sec": 0, 00:11:39.148 "r_mbytes_per_sec": 0, 00:11:39.148 "w_mbytes_per_sec": 0 00:11:39.148 }, 00:11:39.148 "claimed": false, 00:11:39.148 "zoned": false, 00:11:39.148 "supported_io_types": { 00:11:39.148 "read": true, 00:11:39.148 "write": true, 00:11:39.148 "unmap": true, 00:11:39.148 "flush": true, 00:11:39.148 "reset": true, 00:11:39.148 "nvme_admin": false, 00:11:39.148 "nvme_io": false, 00:11:39.148 "nvme_io_md": false, 00:11:39.148 "write_zeroes": true, 00:11:39.148 "zcopy": true, 00:11:39.148 "get_zone_info": false, 00:11:39.148 "zone_management": false, 00:11:39.148 "zone_append": false, 
00:11:39.148 "compare": false, 00:11:39.148 "compare_and_write": false, 00:11:39.148 "abort": true, 00:11:39.148 "seek_hole": false, 00:11:39.148 "seek_data": false, 00:11:39.148 "copy": true, 00:11:39.148 "nvme_iov_md": false 00:11:39.148 }, 00:11:39.148 "memory_domains": [ 00:11:39.148 { 00:11:39.148 "dma_device_id": "system", 00:11:39.148 "dma_device_type": 1 00:11:39.148 }, 00:11:39.148 { 00:11:39.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.148 "dma_device_type": 2 00:11:39.148 } 00:11:39.148 ], 00:11:39.148 "driver_specific": {} 00:11:39.148 } 00:11:39.148 ] 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.149 [2024-10-15 09:10:57.016629] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.149 [2024-10-15 09:10:57.016806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.149 [2024-10-15 09:10:57.016868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.149 [2024-10-15 09:10:57.018912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.149 [2024-10-15 09:10:57.019008] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.149 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.407 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.407 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:39.407 "name": "Existed_Raid", 00:11:39.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.407 "strip_size_kb": 0, 00:11:39.407 "state": "configuring", 00:11:39.407 "raid_level": "raid1", 00:11:39.407 "superblock": false, 00:11:39.407 "num_base_bdevs": 4, 00:11:39.407 "num_base_bdevs_discovered": 3, 00:11:39.407 "num_base_bdevs_operational": 4, 00:11:39.407 "base_bdevs_list": [ 00:11:39.407 { 00:11:39.407 "name": "BaseBdev1", 00:11:39.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.407 "is_configured": false, 00:11:39.407 "data_offset": 0, 00:11:39.407 "data_size": 0 00:11:39.407 }, 00:11:39.407 { 00:11:39.407 "name": "BaseBdev2", 00:11:39.407 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:39.407 "is_configured": true, 00:11:39.407 "data_offset": 0, 00:11:39.407 "data_size": 65536 00:11:39.407 }, 00:11:39.407 { 00:11:39.407 "name": "BaseBdev3", 00:11:39.407 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:39.407 "is_configured": true, 00:11:39.407 "data_offset": 0, 00:11:39.407 "data_size": 65536 00:11:39.407 }, 00:11:39.407 { 00:11:39.407 "name": "BaseBdev4", 00:11:39.407 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:39.407 "is_configured": true, 00:11:39.407 "data_offset": 0, 00:11:39.407 "data_size": 65536 00:11:39.407 } 00:11:39.407 ] 00:11:39.407 }' 00:11:39.407 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.407 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.666 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.667 [2024-10-15 09:10:57.479871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.667 "name": "Existed_Raid", 00:11:39.667 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:39.667 "strip_size_kb": 0, 00:11:39.667 "state": "configuring", 00:11:39.667 "raid_level": "raid1", 00:11:39.667 "superblock": false, 00:11:39.667 "num_base_bdevs": 4, 00:11:39.667 "num_base_bdevs_discovered": 2, 00:11:39.667 "num_base_bdevs_operational": 4, 00:11:39.667 "base_bdevs_list": [ 00:11:39.667 { 00:11:39.667 "name": "BaseBdev1", 00:11:39.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.667 "is_configured": false, 00:11:39.667 "data_offset": 0, 00:11:39.667 "data_size": 0 00:11:39.667 }, 00:11:39.667 { 00:11:39.667 "name": null, 00:11:39.667 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:39.667 "is_configured": false, 00:11:39.667 "data_offset": 0, 00:11:39.667 "data_size": 65536 00:11:39.667 }, 00:11:39.667 { 00:11:39.667 "name": "BaseBdev3", 00:11:39.667 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:39.667 "is_configured": true, 00:11:39.667 "data_offset": 0, 00:11:39.667 "data_size": 65536 00:11:39.667 }, 00:11:39.667 { 00:11:39.667 "name": "BaseBdev4", 00:11:39.667 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:39.667 "is_configured": true, 00:11:39.667 "data_offset": 0, 00:11:39.667 "data_size": 65536 00:11:39.667 } 00:11:39.667 ] 00:11:39.667 }' 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.667 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.243 [2024-10-15 09:10:57.972780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.243 BaseBdev1 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.243 09:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.243 [ 00:11:40.243 { 00:11:40.243 "name": "BaseBdev1", 00:11:40.243 "aliases": [ 00:11:40.243 "dc9070f6-683c-46db-ad1a-ae5546fb29fc" 00:11:40.243 ], 00:11:40.243 "product_name": "Malloc disk", 00:11:40.243 "block_size": 512, 00:11:40.243 "num_blocks": 65536, 00:11:40.243 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:40.243 "assigned_rate_limits": { 00:11:40.243 "rw_ios_per_sec": 0, 00:11:40.243 "rw_mbytes_per_sec": 0, 00:11:40.243 "r_mbytes_per_sec": 0, 00:11:40.243 "w_mbytes_per_sec": 0 00:11:40.243 }, 00:11:40.243 "claimed": true, 00:11:40.243 "claim_type": "exclusive_write", 00:11:40.243 "zoned": false, 00:11:40.243 "supported_io_types": { 00:11:40.243 "read": true, 00:11:40.243 "write": true, 00:11:40.243 "unmap": true, 00:11:40.243 "flush": true, 00:11:40.243 "reset": true, 00:11:40.243 "nvme_admin": false, 00:11:40.243 "nvme_io": false, 00:11:40.243 "nvme_io_md": false, 00:11:40.243 "write_zeroes": true, 00:11:40.243 "zcopy": true, 00:11:40.243 "get_zone_info": false, 00:11:40.243 "zone_management": false, 00:11:40.243 "zone_append": false, 00:11:40.243 "compare": false, 00:11:40.243 "compare_and_write": false, 00:11:40.243 "abort": true, 00:11:40.243 "seek_hole": false, 00:11:40.243 "seek_data": false, 00:11:40.243 "copy": true, 00:11:40.243 "nvme_iov_md": false 00:11:40.243 }, 00:11:40.243 "memory_domains": [ 00:11:40.243 { 00:11:40.243 "dma_device_id": "system", 00:11:40.243 "dma_device_type": 1 00:11:40.243 }, 00:11:40.243 { 00:11:40.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.243 "dma_device_type": 2 00:11:40.243 } 00:11:40.243 ], 00:11:40.243 "driver_specific": {} 00:11:40.243 } 00:11:40.243 ] 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.243 "name": "Existed_Raid", 00:11:40.243 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:40.243 "strip_size_kb": 0, 00:11:40.243 "state": "configuring", 00:11:40.243 "raid_level": "raid1", 00:11:40.243 "superblock": false, 00:11:40.243 "num_base_bdevs": 4, 00:11:40.243 "num_base_bdevs_discovered": 3, 00:11:40.243 "num_base_bdevs_operational": 4, 00:11:40.243 "base_bdevs_list": [ 00:11:40.243 { 00:11:40.243 "name": "BaseBdev1", 00:11:40.243 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:40.243 "is_configured": true, 00:11:40.243 "data_offset": 0, 00:11:40.243 "data_size": 65536 00:11:40.243 }, 00:11:40.243 { 00:11:40.243 "name": null, 00:11:40.243 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:40.243 "is_configured": false, 00:11:40.243 "data_offset": 0, 00:11:40.243 "data_size": 65536 00:11:40.243 }, 00:11:40.243 { 00:11:40.243 "name": "BaseBdev3", 00:11:40.243 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:40.243 "is_configured": true, 00:11:40.243 "data_offset": 0, 00:11:40.243 "data_size": 65536 00:11:40.243 }, 00:11:40.243 { 00:11:40.243 "name": "BaseBdev4", 00:11:40.243 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:40.243 "is_configured": true, 00:11:40.243 "data_offset": 0, 00:11:40.243 "data_size": 65536 00:11:40.243 } 00:11:40.243 ] 00:11:40.243 }' 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.243 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.810 [2024-10-15 09:10:58.551887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.810 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.811 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:40.811 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.811 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.811 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.811 09:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.811 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.811 "name": "Existed_Raid", 00:11:40.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.811 "strip_size_kb": 0, 00:11:40.811 "state": "configuring", 00:11:40.811 "raid_level": "raid1", 00:11:40.811 "superblock": false, 00:11:40.811 "num_base_bdevs": 4, 00:11:40.811 "num_base_bdevs_discovered": 2, 00:11:40.811 "num_base_bdevs_operational": 4, 00:11:40.811 "base_bdevs_list": [ 00:11:40.811 { 00:11:40.811 "name": "BaseBdev1", 00:11:40.811 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:40.811 "is_configured": true, 00:11:40.811 "data_offset": 0, 00:11:40.811 "data_size": 65536 00:11:40.811 }, 00:11:40.811 { 00:11:40.811 "name": null, 00:11:40.811 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:40.811 "is_configured": false, 00:11:40.811 "data_offset": 0, 00:11:40.811 "data_size": 65536 00:11:40.811 }, 00:11:40.811 { 00:11:40.811 "name": null, 00:11:40.811 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:40.811 "is_configured": false, 00:11:40.811 "data_offset": 0, 00:11:40.811 "data_size": 65536 00:11:40.811 }, 00:11:40.811 { 00:11:40.811 "name": "BaseBdev4", 00:11:40.811 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:40.811 "is_configured": true, 00:11:40.811 "data_offset": 0, 00:11:40.811 "data_size": 65536 00:11:40.811 } 00:11:40.811 ] 00:11:40.811 }' 00:11:40.811 09:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.811 09:10:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.378 [2024-10-15 09:10:59.094957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.378 09:10:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.378 "name": "Existed_Raid", 00:11:41.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.378 "strip_size_kb": 0, 00:11:41.378 "state": "configuring", 00:11:41.378 "raid_level": "raid1", 00:11:41.378 "superblock": false, 00:11:41.378 "num_base_bdevs": 4, 00:11:41.378 "num_base_bdevs_discovered": 3, 00:11:41.378 "num_base_bdevs_operational": 4, 00:11:41.378 "base_bdevs_list": [ 00:11:41.378 { 00:11:41.378 "name": "BaseBdev1", 00:11:41.378 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:41.378 "is_configured": true, 00:11:41.378 "data_offset": 0, 00:11:41.378 "data_size": 65536 00:11:41.378 }, 00:11:41.378 { 00:11:41.378 "name": null, 00:11:41.378 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:41.378 "is_configured": false, 00:11:41.378 "data_offset": 
0, 00:11:41.378 "data_size": 65536 00:11:41.378 }, 00:11:41.378 { 00:11:41.378 "name": "BaseBdev3", 00:11:41.378 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:41.378 "is_configured": true, 00:11:41.378 "data_offset": 0, 00:11:41.378 "data_size": 65536 00:11:41.378 }, 00:11:41.378 { 00:11:41.378 "name": "BaseBdev4", 00:11:41.378 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:41.378 "is_configured": true, 00:11:41.378 "data_offset": 0, 00:11:41.378 "data_size": 65536 00:11:41.378 } 00:11:41.378 ] 00:11:41.378 }' 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.378 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.946 [2024-10-15 09:10:59.594155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.946 09:10:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.946 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.946 "name": "Existed_Raid", 00:11:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.946 "strip_size_kb": 0, 00:11:41.946 "state": "configuring", 00:11:41.946 
"raid_level": "raid1", 00:11:41.946 "superblock": false, 00:11:41.946 "num_base_bdevs": 4, 00:11:41.946 "num_base_bdevs_discovered": 2, 00:11:41.946 "num_base_bdevs_operational": 4, 00:11:41.946 "base_bdevs_list": [ 00:11:41.946 { 00:11:41.946 "name": null, 00:11:41.946 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:41.946 "is_configured": false, 00:11:41.946 "data_offset": 0, 00:11:41.946 "data_size": 65536 00:11:41.946 }, 00:11:41.946 { 00:11:41.946 "name": null, 00:11:41.946 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:41.946 "is_configured": false, 00:11:41.946 "data_offset": 0, 00:11:41.946 "data_size": 65536 00:11:41.946 }, 00:11:41.946 { 00:11:41.947 "name": "BaseBdev3", 00:11:41.947 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:41.947 "is_configured": true, 00:11:41.947 "data_offset": 0, 00:11:41.947 "data_size": 65536 00:11:41.947 }, 00:11:41.947 { 00:11:41.947 "name": "BaseBdev4", 00:11:41.947 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:41.947 "is_configured": true, 00:11:41.947 "data_offset": 0, 00:11:41.947 "data_size": 65536 00:11:41.947 } 00:11:41.947 ] 00:11:41.947 }' 00:11:41.947 09:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.947 09:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.514 [2024-10-15 09:11:00.166252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.514 "name": "Existed_Raid", 00:11:42.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.514 "strip_size_kb": 0, 00:11:42.514 "state": "configuring", 00:11:42.514 "raid_level": "raid1", 00:11:42.514 "superblock": false, 00:11:42.514 "num_base_bdevs": 4, 00:11:42.514 "num_base_bdevs_discovered": 3, 00:11:42.514 "num_base_bdevs_operational": 4, 00:11:42.514 "base_bdevs_list": [ 00:11:42.514 { 00:11:42.514 "name": null, 00:11:42.514 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:42.514 "is_configured": false, 00:11:42.514 "data_offset": 0, 00:11:42.514 "data_size": 65536 00:11:42.514 }, 00:11:42.514 { 00:11:42.514 "name": "BaseBdev2", 00:11:42.514 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:42.514 "is_configured": true, 00:11:42.514 "data_offset": 0, 00:11:42.514 "data_size": 65536 00:11:42.514 }, 00:11:42.514 { 00:11:42.514 "name": "BaseBdev3", 00:11:42.514 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:42.514 "is_configured": true, 00:11:42.514 "data_offset": 0, 00:11:42.514 "data_size": 65536 00:11:42.514 }, 00:11:42.514 { 00:11:42.514 "name": "BaseBdev4", 00:11:42.514 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:42.514 "is_configured": true, 00:11:42.514 "data_offset": 0, 00:11:42.514 "data_size": 65536 00:11:42.514 } 00:11:42.514 ] 00:11:42.514 }' 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.514 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.773 09:11:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.773 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:42.773 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.773 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.773 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dc9070f6-683c-46db-ad1a-ae5546fb29fc 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.032 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.032 [2024-10-15 09:11:00.784884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:43.032 [2024-10-15 09:11:00.785000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:43.032 [2024-10-15 09:11:00.785028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:43.032 
[2024-10-15 09:11:00.785372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:43.032 [2024-10-15 09:11:00.785602] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:43.033 [2024-10-15 09:11:00.785650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:43.033 [2024-10-15 09:11:00.786003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.033 NewBaseBdev 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.033 [ 00:11:43.033 { 00:11:43.033 "name": "NewBaseBdev", 00:11:43.033 "aliases": [ 00:11:43.033 "dc9070f6-683c-46db-ad1a-ae5546fb29fc" 00:11:43.033 ], 00:11:43.033 "product_name": "Malloc disk", 00:11:43.033 "block_size": 512, 00:11:43.033 "num_blocks": 65536, 00:11:43.033 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:43.033 "assigned_rate_limits": { 00:11:43.033 "rw_ios_per_sec": 0, 00:11:43.033 "rw_mbytes_per_sec": 0, 00:11:43.033 "r_mbytes_per_sec": 0, 00:11:43.033 "w_mbytes_per_sec": 0 00:11:43.033 }, 00:11:43.033 "claimed": true, 00:11:43.033 "claim_type": "exclusive_write", 00:11:43.033 "zoned": false, 00:11:43.033 "supported_io_types": { 00:11:43.033 "read": true, 00:11:43.033 "write": true, 00:11:43.033 "unmap": true, 00:11:43.033 "flush": true, 00:11:43.033 "reset": true, 00:11:43.033 "nvme_admin": false, 00:11:43.033 "nvme_io": false, 00:11:43.033 "nvme_io_md": false, 00:11:43.033 "write_zeroes": true, 00:11:43.033 "zcopy": true, 00:11:43.033 "get_zone_info": false, 00:11:43.033 "zone_management": false, 00:11:43.033 "zone_append": false, 00:11:43.033 "compare": false, 00:11:43.033 "compare_and_write": false, 00:11:43.033 "abort": true, 00:11:43.033 "seek_hole": false, 00:11:43.033 "seek_data": false, 00:11:43.033 "copy": true, 00:11:43.033 "nvme_iov_md": false 00:11:43.033 }, 00:11:43.033 "memory_domains": [ 00:11:43.033 { 00:11:43.033 "dma_device_id": "system", 00:11:43.033 "dma_device_type": 1 00:11:43.033 }, 00:11:43.033 { 00:11:43.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.033 "dma_device_type": 2 00:11:43.033 } 00:11:43.033 ], 00:11:43.033 "driver_specific": {} 00:11:43.033 } 00:11:43.033 ] 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.033 "name": "Existed_Raid", 00:11:43.033 "uuid": "58d0620e-7d6e-4eca-9d5b-83308e4a8863", 00:11:43.033 "strip_size_kb": 0, 00:11:43.033 "state": "online", 00:11:43.033 
"raid_level": "raid1", 00:11:43.033 "superblock": false, 00:11:43.033 "num_base_bdevs": 4, 00:11:43.033 "num_base_bdevs_discovered": 4, 00:11:43.033 "num_base_bdevs_operational": 4, 00:11:43.033 "base_bdevs_list": [ 00:11:43.033 { 00:11:43.033 "name": "NewBaseBdev", 00:11:43.033 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:43.033 "is_configured": true, 00:11:43.033 "data_offset": 0, 00:11:43.033 "data_size": 65536 00:11:43.033 }, 00:11:43.033 { 00:11:43.033 "name": "BaseBdev2", 00:11:43.033 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:43.033 "is_configured": true, 00:11:43.033 "data_offset": 0, 00:11:43.033 "data_size": 65536 00:11:43.033 }, 00:11:43.033 { 00:11:43.033 "name": "BaseBdev3", 00:11:43.033 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:43.033 "is_configured": true, 00:11:43.033 "data_offset": 0, 00:11:43.033 "data_size": 65536 00:11:43.033 }, 00:11:43.033 { 00:11:43.033 "name": "BaseBdev4", 00:11:43.033 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:43.033 "is_configured": true, 00:11:43.033 "data_offset": 0, 00:11:43.033 "data_size": 65536 00:11:43.033 } 00:11:43.033 ] 00:11:43.033 }' 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.033 09:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.603 [2024-10-15 09:11:01.256580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.603 "name": "Existed_Raid", 00:11:43.603 "aliases": [ 00:11:43.603 "58d0620e-7d6e-4eca-9d5b-83308e4a8863" 00:11:43.603 ], 00:11:43.603 "product_name": "Raid Volume", 00:11:43.603 "block_size": 512, 00:11:43.603 "num_blocks": 65536, 00:11:43.603 "uuid": "58d0620e-7d6e-4eca-9d5b-83308e4a8863", 00:11:43.603 "assigned_rate_limits": { 00:11:43.603 "rw_ios_per_sec": 0, 00:11:43.603 "rw_mbytes_per_sec": 0, 00:11:43.603 "r_mbytes_per_sec": 0, 00:11:43.603 "w_mbytes_per_sec": 0 00:11:43.603 }, 00:11:43.603 "claimed": false, 00:11:43.603 "zoned": false, 00:11:43.603 "supported_io_types": { 00:11:43.603 "read": true, 00:11:43.603 "write": true, 00:11:43.603 "unmap": false, 00:11:43.603 "flush": false, 00:11:43.603 "reset": true, 00:11:43.603 "nvme_admin": false, 00:11:43.603 "nvme_io": false, 00:11:43.603 "nvme_io_md": false, 00:11:43.603 "write_zeroes": true, 00:11:43.603 "zcopy": false, 00:11:43.603 "get_zone_info": false, 00:11:43.603 "zone_management": false, 00:11:43.603 "zone_append": false, 00:11:43.603 "compare": false, 00:11:43.603 "compare_and_write": false, 00:11:43.603 "abort": false, 00:11:43.603 "seek_hole": false, 00:11:43.603 "seek_data": false, 00:11:43.603 
"copy": false, 00:11:43.603 "nvme_iov_md": false 00:11:43.603 }, 00:11:43.603 "memory_domains": [ 00:11:43.603 { 00:11:43.603 "dma_device_id": "system", 00:11:43.603 "dma_device_type": 1 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.603 "dma_device_type": 2 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "dma_device_id": "system", 00:11:43.603 "dma_device_type": 1 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.603 "dma_device_type": 2 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "dma_device_id": "system", 00:11:43.603 "dma_device_type": 1 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.603 "dma_device_type": 2 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "dma_device_id": "system", 00:11:43.603 "dma_device_type": 1 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.603 "dma_device_type": 2 00:11:43.603 } 00:11:43.603 ], 00:11:43.603 "driver_specific": { 00:11:43.603 "raid": { 00:11:43.603 "uuid": "58d0620e-7d6e-4eca-9d5b-83308e4a8863", 00:11:43.603 "strip_size_kb": 0, 00:11:43.603 "state": "online", 00:11:43.603 "raid_level": "raid1", 00:11:43.603 "superblock": false, 00:11:43.603 "num_base_bdevs": 4, 00:11:43.603 "num_base_bdevs_discovered": 4, 00:11:43.603 "num_base_bdevs_operational": 4, 00:11:43.603 "base_bdevs_list": [ 00:11:43.603 { 00:11:43.603 "name": "NewBaseBdev", 00:11:43.603 "uuid": "dc9070f6-683c-46db-ad1a-ae5546fb29fc", 00:11:43.603 "is_configured": true, 00:11:43.603 "data_offset": 0, 00:11:43.603 "data_size": 65536 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "name": "BaseBdev2", 00:11:43.603 "uuid": "195b0fd9-80eb-4bca-b68b-df99a0c86362", 00:11:43.603 "is_configured": true, 00:11:43.603 "data_offset": 0, 00:11:43.603 "data_size": 65536 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "name": "BaseBdev3", 00:11:43.603 "uuid": "ffda5b2a-3779-45d6-aa08-a093a194e059", 00:11:43.603 
"is_configured": true, 00:11:43.603 "data_offset": 0, 00:11:43.603 "data_size": 65536 00:11:43.603 }, 00:11:43.603 { 00:11:43.603 "name": "BaseBdev4", 00:11:43.603 "uuid": "96c62a01-588f-4ef5-b1c1-09f232b03718", 00:11:43.603 "is_configured": true, 00:11:43.603 "data_offset": 0, 00:11:43.603 "data_size": 65536 00:11:43.603 } 00:11:43.603 ] 00:11:43.603 } 00:11:43.603 } 00:11:43.603 }' 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:43.603 BaseBdev2 00:11:43.603 BaseBdev3 00:11:43.603 BaseBdev4' 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.603 09:11:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.603 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.604 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.604 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.863 09:11:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.863 [2024-10-15 09:11:01.607698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.863 [2024-10-15 09:11:01.607831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.863 [2024-10-15 09:11:01.607979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.863 [2024-10-15 09:11:01.608331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.863 [2024-10-15 09:11:01.608396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73293 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73293 ']' 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73293 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73293 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73293' 00:11:43.863 killing process with pid 73293 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73293 00:11:43.863 [2024-10-15 09:11:01.655011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.863 09:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73293 00:11:44.443 [2024-10-15 09:11:02.058914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:45.404 00:11:45.404 real 0m11.882s 00:11:45.404 user 0m18.872s 00:11:45.404 sys 0m2.167s 00:11:45.404 ************************************ 00:11:45.404 END TEST raid_state_function_test 00:11:45.404 ************************************ 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:45.404 09:11:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:45.404 09:11:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:45.404 09:11:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.404 09:11:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.404 ************************************ 00:11:45.404 START TEST raid_state_function_test_sb 00:11:45.404 ************************************ 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:45.404 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:45.405 
09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:45.405 Process raid pid: 73967 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73967 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73967' 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73967 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73967 ']' 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.405 09:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.663 [2024-10-15 09:11:03.382368] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:11:45.663 [2024-10-15 09:11:03.382581] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.663 [2024-10-15 09:11:03.545955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.921 [2024-10-15 09:11:03.677292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.180 [2024-10-15 09:11:03.892971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.180 [2024-10-15 09:11:03.893114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.438 [2024-10-15 09:11:04.233905] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:46.438 [2024-10-15 09:11:04.234066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:46.438 [2024-10-15 09:11:04.234099] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.438 [2024-10-15 09:11:04.234124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.438 [2024-10-15 09:11:04.234144] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:46.438 [2024-10-15 09:11:04.234165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:46.438 [2024-10-15 09:11:04.234183] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:46.438 [2024-10-15 09:11:04.234245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.438 09:11:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.438 "name": "Existed_Raid", 00:11:46.438 "uuid": "909f9209-1cae-40d2-b4d8-b5309490ef57", 00:11:46.438 "strip_size_kb": 0, 00:11:46.438 "state": "configuring", 00:11:46.438 "raid_level": "raid1", 00:11:46.438 "superblock": true, 00:11:46.438 "num_base_bdevs": 4, 00:11:46.438 "num_base_bdevs_discovered": 0, 00:11:46.438 "num_base_bdevs_operational": 4, 00:11:46.438 "base_bdevs_list": [ 00:11:46.438 { 00:11:46.438 "name": "BaseBdev1", 00:11:46.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.438 "is_configured": false, 00:11:46.438 "data_offset": 0, 00:11:46.438 "data_size": 0 00:11:46.438 }, 00:11:46.438 { 00:11:46.438 "name": "BaseBdev2", 00:11:46.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.438 "is_configured": false, 00:11:46.438 "data_offset": 0, 00:11:46.438 "data_size": 0 00:11:46.438 }, 00:11:46.438 { 00:11:46.438 "name": "BaseBdev3", 00:11:46.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.438 "is_configured": false, 00:11:46.438 "data_offset": 0, 00:11:46.438 "data_size": 0 00:11:46.438 }, 00:11:46.438 { 00:11:46.438 "name": "BaseBdev4", 00:11:46.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.438 "is_configured": false, 00:11:46.438 "data_offset": 0, 00:11:46.438 "data_size": 0 00:11:46.438 } 00:11:46.438 ] 00:11:46.438 }' 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.438 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.006 09:11:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:47.006 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.006 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.006 [2024-10-15 09:11:04.705007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.006 [2024-10-15 09:11:04.705111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:47.006 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.006 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.006 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.006 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.006 [2024-10-15 09:11:04.717003] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.006 [2024-10-15 09:11:04.717099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.007 [2024-10-15 09:11:04.717126] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.007 [2024-10-15 09:11:04.717149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.007 [2024-10-15 09:11:04.717168] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:47.007 [2024-10-15 09:11:04.717189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:47.007 [2024-10-15 09:11:04.717207] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:47.007 [2024-10-15 09:11:04.717228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.007 [2024-10-15 09:11:04.763265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.007 BaseBdev1 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.007 [ 00:11:47.007 { 00:11:47.007 "name": "BaseBdev1", 00:11:47.007 "aliases": [ 00:11:47.007 "00a9da6d-7af1-4595-aff7-a2ef852f7a85" 00:11:47.007 ], 00:11:47.007 "product_name": "Malloc disk", 00:11:47.007 "block_size": 512, 00:11:47.007 "num_blocks": 65536, 00:11:47.007 "uuid": "00a9da6d-7af1-4595-aff7-a2ef852f7a85", 00:11:47.007 "assigned_rate_limits": { 00:11:47.007 "rw_ios_per_sec": 0, 00:11:47.007 "rw_mbytes_per_sec": 0, 00:11:47.007 "r_mbytes_per_sec": 0, 00:11:47.007 "w_mbytes_per_sec": 0 00:11:47.007 }, 00:11:47.007 "claimed": true, 00:11:47.007 "claim_type": "exclusive_write", 00:11:47.007 "zoned": false, 00:11:47.007 "supported_io_types": { 00:11:47.007 "read": true, 00:11:47.007 "write": true, 00:11:47.007 "unmap": true, 00:11:47.007 "flush": true, 00:11:47.007 "reset": true, 00:11:47.007 "nvme_admin": false, 00:11:47.007 "nvme_io": false, 00:11:47.007 "nvme_io_md": false, 00:11:47.007 "write_zeroes": true, 00:11:47.007 "zcopy": true, 00:11:47.007 "get_zone_info": false, 00:11:47.007 "zone_management": false, 00:11:47.007 "zone_append": false, 00:11:47.007 "compare": false, 00:11:47.007 "compare_and_write": false, 00:11:47.007 "abort": true, 00:11:47.007 "seek_hole": false, 00:11:47.007 "seek_data": false, 00:11:47.007 "copy": true, 00:11:47.007 "nvme_iov_md": false 00:11:47.007 }, 00:11:47.007 "memory_domains": [ 00:11:47.007 { 00:11:47.007 "dma_device_id": "system", 00:11:47.007 "dma_device_type": 1 00:11:47.007 }, 00:11:47.007 { 00:11:47.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.007 "dma_device_type": 2 00:11:47.007 } 00:11:47.007 ], 00:11:47.007 "driver_specific": {} 
00:11:47.007 } 00:11:47.007 ] 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.007 "name": "Existed_Raid", 00:11:47.007 "uuid": "5f04f68e-5462-4a88-bc58-1374f9f34498", 00:11:47.007 "strip_size_kb": 0, 00:11:47.007 "state": "configuring", 00:11:47.007 "raid_level": "raid1", 00:11:47.007 "superblock": true, 00:11:47.007 "num_base_bdevs": 4, 00:11:47.007 "num_base_bdevs_discovered": 1, 00:11:47.007 "num_base_bdevs_operational": 4, 00:11:47.007 "base_bdevs_list": [ 00:11:47.007 { 00:11:47.007 "name": "BaseBdev1", 00:11:47.007 "uuid": "00a9da6d-7af1-4595-aff7-a2ef852f7a85", 00:11:47.007 "is_configured": true, 00:11:47.007 "data_offset": 2048, 00:11:47.007 "data_size": 63488 00:11:47.007 }, 00:11:47.007 { 00:11:47.007 "name": "BaseBdev2", 00:11:47.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.007 "is_configured": false, 00:11:47.007 "data_offset": 0, 00:11:47.007 "data_size": 0 00:11:47.007 }, 00:11:47.007 { 00:11:47.007 "name": "BaseBdev3", 00:11:47.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.007 "is_configured": false, 00:11:47.007 "data_offset": 0, 00:11:47.007 "data_size": 0 00:11:47.007 }, 00:11:47.007 { 00:11:47.007 "name": "BaseBdev4", 00:11:47.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.007 "is_configured": false, 00:11:47.007 "data_offset": 0, 00:11:47.007 "data_size": 0 00:11:47.007 } 00:11:47.007 ] 00:11:47.007 }' 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.007 09:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.604 [2024-10-15 09:11:05.258471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.604 [2024-10-15 09:11:05.258536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.604 [2024-10-15 09:11:05.270565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.604 [2024-10-15 09:11:05.272440] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.604 [2024-10-15 09:11:05.272495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.604 [2024-10-15 09:11:05.272504] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:47.604 [2024-10-15 09:11:05.272515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:47.604 [2024-10-15 09:11:05.272521] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:47.604 [2024-10-15 09:11:05.272530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:47.604 09:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.604 "name": 
"Existed_Raid", 00:11:47.604 "uuid": "d600d8ae-4ba5-4b2b-9b45-aa29aa14726e", 00:11:47.604 "strip_size_kb": 0, 00:11:47.604 "state": "configuring", 00:11:47.604 "raid_level": "raid1", 00:11:47.604 "superblock": true, 00:11:47.604 "num_base_bdevs": 4, 00:11:47.604 "num_base_bdevs_discovered": 1, 00:11:47.604 "num_base_bdevs_operational": 4, 00:11:47.604 "base_bdevs_list": [ 00:11:47.604 { 00:11:47.604 "name": "BaseBdev1", 00:11:47.604 "uuid": "00a9da6d-7af1-4595-aff7-a2ef852f7a85", 00:11:47.604 "is_configured": true, 00:11:47.604 "data_offset": 2048, 00:11:47.604 "data_size": 63488 00:11:47.604 }, 00:11:47.604 { 00:11:47.604 "name": "BaseBdev2", 00:11:47.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.604 "is_configured": false, 00:11:47.604 "data_offset": 0, 00:11:47.604 "data_size": 0 00:11:47.604 }, 00:11:47.604 { 00:11:47.604 "name": "BaseBdev3", 00:11:47.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.604 "is_configured": false, 00:11:47.604 "data_offset": 0, 00:11:47.604 "data_size": 0 00:11:47.604 }, 00:11:47.604 { 00:11:47.604 "name": "BaseBdev4", 00:11:47.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.604 "is_configured": false, 00:11:47.604 "data_offset": 0, 00:11:47.604 "data_size": 0 00:11:47.604 } 00:11:47.604 ] 00:11:47.604 }' 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.604 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.863 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:47.863 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.863 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.122 [2024-10-15 09:11:05.785660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.122 
BaseBdev2 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.122 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.122 [ 00:11:48.122 { 00:11:48.122 "name": "BaseBdev2", 00:11:48.122 "aliases": [ 00:11:48.122 "6c18d838-f26f-44e0-ae52-ca17db81cf1a" 00:11:48.122 ], 00:11:48.122 "product_name": "Malloc disk", 00:11:48.122 "block_size": 512, 00:11:48.122 "num_blocks": 65536, 00:11:48.122 "uuid": "6c18d838-f26f-44e0-ae52-ca17db81cf1a", 00:11:48.122 "assigned_rate_limits": { 
00:11:48.122 "rw_ios_per_sec": 0, 00:11:48.122 "rw_mbytes_per_sec": 0, 00:11:48.122 "r_mbytes_per_sec": 0, 00:11:48.122 "w_mbytes_per_sec": 0 00:11:48.122 }, 00:11:48.122 "claimed": true, 00:11:48.122 "claim_type": "exclusive_write", 00:11:48.122 "zoned": false, 00:11:48.122 "supported_io_types": { 00:11:48.122 "read": true, 00:11:48.122 "write": true, 00:11:48.122 "unmap": true, 00:11:48.122 "flush": true, 00:11:48.122 "reset": true, 00:11:48.122 "nvme_admin": false, 00:11:48.122 "nvme_io": false, 00:11:48.122 "nvme_io_md": false, 00:11:48.122 "write_zeroes": true, 00:11:48.122 "zcopy": true, 00:11:48.122 "get_zone_info": false, 00:11:48.122 "zone_management": false, 00:11:48.122 "zone_append": false, 00:11:48.122 "compare": false, 00:11:48.122 "compare_and_write": false, 00:11:48.122 "abort": true, 00:11:48.122 "seek_hole": false, 00:11:48.122 "seek_data": false, 00:11:48.122 "copy": true, 00:11:48.122 "nvme_iov_md": false 00:11:48.122 }, 00:11:48.122 "memory_domains": [ 00:11:48.123 { 00:11:48.123 "dma_device_id": "system", 00:11:48.123 "dma_device_type": 1 00:11:48.123 }, 00:11:48.123 { 00:11:48.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.123 "dma_device_type": 2 00:11:48.123 } 00:11:48.123 ], 00:11:48.123 "driver_specific": {} 00:11:48.123 } 00:11:48.123 ] 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.123 "name": "Existed_Raid", 00:11:48.123 "uuid": "d600d8ae-4ba5-4b2b-9b45-aa29aa14726e", 00:11:48.123 "strip_size_kb": 0, 00:11:48.123 "state": "configuring", 00:11:48.123 "raid_level": "raid1", 00:11:48.123 "superblock": true, 00:11:48.123 "num_base_bdevs": 4, 00:11:48.123 "num_base_bdevs_discovered": 2, 00:11:48.123 "num_base_bdevs_operational": 4, 00:11:48.123 
"base_bdevs_list": [ 00:11:48.123 { 00:11:48.123 "name": "BaseBdev1", 00:11:48.123 "uuid": "00a9da6d-7af1-4595-aff7-a2ef852f7a85", 00:11:48.123 "is_configured": true, 00:11:48.123 "data_offset": 2048, 00:11:48.123 "data_size": 63488 00:11:48.123 }, 00:11:48.123 { 00:11:48.123 "name": "BaseBdev2", 00:11:48.123 "uuid": "6c18d838-f26f-44e0-ae52-ca17db81cf1a", 00:11:48.123 "is_configured": true, 00:11:48.123 "data_offset": 2048, 00:11:48.123 "data_size": 63488 00:11:48.123 }, 00:11:48.123 { 00:11:48.123 "name": "BaseBdev3", 00:11:48.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.123 "is_configured": false, 00:11:48.123 "data_offset": 0, 00:11:48.123 "data_size": 0 00:11:48.123 }, 00:11:48.123 { 00:11:48.123 "name": "BaseBdev4", 00:11:48.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.123 "is_configured": false, 00:11:48.123 "data_offset": 0, 00:11:48.123 "data_size": 0 00:11:48.123 } 00:11:48.123 ] 00:11:48.123 }' 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.123 09:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.691 [2024-10-15 09:11:06.353826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.691 BaseBdev3 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.691 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.691 [ 00:11:48.691 { 00:11:48.691 "name": "BaseBdev3", 00:11:48.691 "aliases": [ 00:11:48.691 "2dd08393-b6eb-42b0-bcab-0eba09438a6f" 00:11:48.692 ], 00:11:48.692 "product_name": "Malloc disk", 00:11:48.692 "block_size": 512, 00:11:48.692 "num_blocks": 65536, 00:11:48.692 "uuid": "2dd08393-b6eb-42b0-bcab-0eba09438a6f", 00:11:48.692 "assigned_rate_limits": { 00:11:48.692 "rw_ios_per_sec": 0, 00:11:48.692 "rw_mbytes_per_sec": 0, 00:11:48.692 "r_mbytes_per_sec": 0, 00:11:48.692 "w_mbytes_per_sec": 0 00:11:48.692 }, 00:11:48.692 "claimed": true, 00:11:48.692 "claim_type": "exclusive_write", 00:11:48.692 "zoned": false, 00:11:48.692 "supported_io_types": { 00:11:48.692 "read": true, 00:11:48.692 
"write": true, 00:11:48.692 "unmap": true, 00:11:48.692 "flush": true, 00:11:48.692 "reset": true, 00:11:48.692 "nvme_admin": false, 00:11:48.692 "nvme_io": false, 00:11:48.692 "nvme_io_md": false, 00:11:48.692 "write_zeroes": true, 00:11:48.692 "zcopy": true, 00:11:48.692 "get_zone_info": false, 00:11:48.692 "zone_management": false, 00:11:48.692 "zone_append": false, 00:11:48.692 "compare": false, 00:11:48.692 "compare_and_write": false, 00:11:48.692 "abort": true, 00:11:48.692 "seek_hole": false, 00:11:48.692 "seek_data": false, 00:11:48.692 "copy": true, 00:11:48.692 "nvme_iov_md": false 00:11:48.692 }, 00:11:48.692 "memory_domains": [ 00:11:48.692 { 00:11:48.692 "dma_device_id": "system", 00:11:48.692 "dma_device_type": 1 00:11:48.692 }, 00:11:48.692 { 00:11:48.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.692 "dma_device_type": 2 00:11:48.692 } 00:11:48.692 ], 00:11:48.692 "driver_specific": {} 00:11:48.692 } 00:11:48.692 ] 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.692 "name": "Existed_Raid", 00:11:48.692 "uuid": "d600d8ae-4ba5-4b2b-9b45-aa29aa14726e", 00:11:48.692 "strip_size_kb": 0, 00:11:48.692 "state": "configuring", 00:11:48.692 "raid_level": "raid1", 00:11:48.692 "superblock": true, 00:11:48.692 "num_base_bdevs": 4, 00:11:48.692 "num_base_bdevs_discovered": 3, 00:11:48.692 "num_base_bdevs_operational": 4, 00:11:48.692 "base_bdevs_list": [ 00:11:48.692 { 00:11:48.692 "name": "BaseBdev1", 00:11:48.692 "uuid": "00a9da6d-7af1-4595-aff7-a2ef852f7a85", 00:11:48.692 "is_configured": true, 00:11:48.692 "data_offset": 2048, 00:11:48.692 "data_size": 63488 00:11:48.692 }, 00:11:48.692 { 00:11:48.692 "name": "BaseBdev2", 00:11:48.692 "uuid": 
"6c18d838-f26f-44e0-ae52-ca17db81cf1a", 00:11:48.692 "is_configured": true, 00:11:48.692 "data_offset": 2048, 00:11:48.692 "data_size": 63488 00:11:48.692 }, 00:11:48.692 { 00:11:48.692 "name": "BaseBdev3", 00:11:48.692 "uuid": "2dd08393-b6eb-42b0-bcab-0eba09438a6f", 00:11:48.692 "is_configured": true, 00:11:48.692 "data_offset": 2048, 00:11:48.692 "data_size": 63488 00:11:48.692 }, 00:11:48.692 { 00:11:48.692 "name": "BaseBdev4", 00:11:48.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.692 "is_configured": false, 00:11:48.692 "data_offset": 0, 00:11:48.692 "data_size": 0 00:11:48.692 } 00:11:48.692 ] 00:11:48.692 }' 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.692 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.951 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:48.951 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.951 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.209 [2024-10-15 09:11:06.850514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:49.210 [2024-10-15 09:11:06.851033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:49.210 [2024-10-15 09:11:06.851057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.210 BaseBdev4 00:11:49.210 [2024-10-15 09:11:06.851375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:49.210 [2024-10-15 09:11:06.851562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:49.210 [2024-10-15 09:11:06.851579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:49.210 [2024-10-15 09:11:06.851778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.210 [ 00:11:49.210 { 00:11:49.210 "name": "BaseBdev4", 00:11:49.210 "aliases": [ 00:11:49.210 "62edee0f-dce7-400d-a188-6e135c48756a" 00:11:49.210 ], 00:11:49.210 "product_name": "Malloc disk", 00:11:49.210 "block_size": 512, 00:11:49.210 
"num_blocks": 65536, 00:11:49.210 "uuid": "62edee0f-dce7-400d-a188-6e135c48756a", 00:11:49.210 "assigned_rate_limits": { 00:11:49.210 "rw_ios_per_sec": 0, 00:11:49.210 "rw_mbytes_per_sec": 0, 00:11:49.210 "r_mbytes_per_sec": 0, 00:11:49.210 "w_mbytes_per_sec": 0 00:11:49.210 }, 00:11:49.210 "claimed": true, 00:11:49.210 "claim_type": "exclusive_write", 00:11:49.210 "zoned": false, 00:11:49.210 "supported_io_types": { 00:11:49.210 "read": true, 00:11:49.210 "write": true, 00:11:49.210 "unmap": true, 00:11:49.210 "flush": true, 00:11:49.210 "reset": true, 00:11:49.210 "nvme_admin": false, 00:11:49.210 "nvme_io": false, 00:11:49.210 "nvme_io_md": false, 00:11:49.210 "write_zeroes": true, 00:11:49.210 "zcopy": true, 00:11:49.210 "get_zone_info": false, 00:11:49.210 "zone_management": false, 00:11:49.210 "zone_append": false, 00:11:49.210 "compare": false, 00:11:49.210 "compare_and_write": false, 00:11:49.210 "abort": true, 00:11:49.210 "seek_hole": false, 00:11:49.210 "seek_data": false, 00:11:49.210 "copy": true, 00:11:49.210 "nvme_iov_md": false 00:11:49.210 }, 00:11:49.210 "memory_domains": [ 00:11:49.210 { 00:11:49.210 "dma_device_id": "system", 00:11:49.210 "dma_device_type": 1 00:11:49.210 }, 00:11:49.210 { 00:11:49.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.210 "dma_device_type": 2 00:11:49.210 } 00:11:49.210 ], 00:11:49.210 "driver_specific": {} 00:11:49.210 } 00:11:49.210 ] 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.210 "name": "Existed_Raid", 00:11:49.210 "uuid": "d600d8ae-4ba5-4b2b-9b45-aa29aa14726e", 00:11:49.210 "strip_size_kb": 0, 00:11:49.210 "state": "online", 00:11:49.210 "raid_level": "raid1", 00:11:49.210 "superblock": true, 00:11:49.210 "num_base_bdevs": 4, 
00:11:49.210 "num_base_bdevs_discovered": 4, 00:11:49.210 "num_base_bdevs_operational": 4, 00:11:49.210 "base_bdevs_list": [ 00:11:49.210 { 00:11:49.210 "name": "BaseBdev1", 00:11:49.210 "uuid": "00a9da6d-7af1-4595-aff7-a2ef852f7a85", 00:11:49.210 "is_configured": true, 00:11:49.210 "data_offset": 2048, 00:11:49.210 "data_size": 63488 00:11:49.210 }, 00:11:49.210 { 00:11:49.210 "name": "BaseBdev2", 00:11:49.210 "uuid": "6c18d838-f26f-44e0-ae52-ca17db81cf1a", 00:11:49.210 "is_configured": true, 00:11:49.210 "data_offset": 2048, 00:11:49.210 "data_size": 63488 00:11:49.210 }, 00:11:49.210 { 00:11:49.210 "name": "BaseBdev3", 00:11:49.210 "uuid": "2dd08393-b6eb-42b0-bcab-0eba09438a6f", 00:11:49.210 "is_configured": true, 00:11:49.210 "data_offset": 2048, 00:11:49.210 "data_size": 63488 00:11:49.210 }, 00:11:49.210 { 00:11:49.210 "name": "BaseBdev4", 00:11:49.210 "uuid": "62edee0f-dce7-400d-a188-6e135c48756a", 00:11:49.210 "is_configured": true, 00:11:49.210 "data_offset": 2048, 00:11:49.210 "data_size": 63488 00:11:49.210 } 00:11:49.210 ] 00:11:49.210 }' 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.210 09:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:49.468 
09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.468 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.468 [2024-10-15 09:11:07.342202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.727 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.727 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:49.727 "name": "Existed_Raid", 00:11:49.727 "aliases": [ 00:11:49.727 "d600d8ae-4ba5-4b2b-9b45-aa29aa14726e" 00:11:49.727 ], 00:11:49.727 "product_name": "Raid Volume", 00:11:49.727 "block_size": 512, 00:11:49.727 "num_blocks": 63488, 00:11:49.727 "uuid": "d600d8ae-4ba5-4b2b-9b45-aa29aa14726e", 00:11:49.727 "assigned_rate_limits": { 00:11:49.727 "rw_ios_per_sec": 0, 00:11:49.727 "rw_mbytes_per_sec": 0, 00:11:49.727 "r_mbytes_per_sec": 0, 00:11:49.727 "w_mbytes_per_sec": 0 00:11:49.727 }, 00:11:49.727 "claimed": false, 00:11:49.727 "zoned": false, 00:11:49.727 "supported_io_types": { 00:11:49.727 "read": true, 00:11:49.727 "write": true, 00:11:49.727 "unmap": false, 00:11:49.727 "flush": false, 00:11:49.727 "reset": true, 00:11:49.727 "nvme_admin": false, 00:11:49.727 "nvme_io": false, 00:11:49.727 "nvme_io_md": false, 00:11:49.727 "write_zeroes": true, 00:11:49.727 "zcopy": false, 00:11:49.727 "get_zone_info": false, 00:11:49.727 "zone_management": false, 00:11:49.727 "zone_append": false, 00:11:49.727 "compare": false, 00:11:49.727 "compare_and_write": false, 00:11:49.727 "abort": false, 00:11:49.727 "seek_hole": false, 00:11:49.727 "seek_data": false, 00:11:49.727 "copy": false, 00:11:49.727 
"nvme_iov_md": false 00:11:49.727 }, 00:11:49.727 "memory_domains": [ 00:11:49.727 { 00:11:49.727 "dma_device_id": "system", 00:11:49.727 "dma_device_type": 1 00:11:49.727 }, 00:11:49.727 { 00:11:49.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.727 "dma_device_type": 2 00:11:49.727 }, 00:11:49.727 { 00:11:49.727 "dma_device_id": "system", 00:11:49.727 "dma_device_type": 1 00:11:49.727 }, 00:11:49.727 { 00:11:49.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.727 "dma_device_type": 2 00:11:49.727 }, 00:11:49.727 { 00:11:49.727 "dma_device_id": "system", 00:11:49.727 "dma_device_type": 1 00:11:49.727 }, 00:11:49.727 { 00:11:49.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.727 "dma_device_type": 2 00:11:49.727 }, 00:11:49.727 { 00:11:49.727 "dma_device_id": "system", 00:11:49.727 "dma_device_type": 1 00:11:49.727 }, 00:11:49.727 { 00:11:49.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.727 "dma_device_type": 2 00:11:49.727 } 00:11:49.727 ], 00:11:49.727 "driver_specific": { 00:11:49.727 "raid": { 00:11:49.727 "uuid": "d600d8ae-4ba5-4b2b-9b45-aa29aa14726e", 00:11:49.727 "strip_size_kb": 0, 00:11:49.727 "state": "online", 00:11:49.727 "raid_level": "raid1", 00:11:49.727 "superblock": true, 00:11:49.727 "num_base_bdevs": 4, 00:11:49.727 "num_base_bdevs_discovered": 4, 00:11:49.728 "num_base_bdevs_operational": 4, 00:11:49.728 "base_bdevs_list": [ 00:11:49.728 { 00:11:49.728 "name": "BaseBdev1", 00:11:49.728 "uuid": "00a9da6d-7af1-4595-aff7-a2ef852f7a85", 00:11:49.728 "is_configured": true, 00:11:49.728 "data_offset": 2048, 00:11:49.728 "data_size": 63488 00:11:49.728 }, 00:11:49.728 { 00:11:49.728 "name": "BaseBdev2", 00:11:49.728 "uuid": "6c18d838-f26f-44e0-ae52-ca17db81cf1a", 00:11:49.728 "is_configured": true, 00:11:49.728 "data_offset": 2048, 00:11:49.728 "data_size": 63488 00:11:49.728 }, 00:11:49.728 { 00:11:49.728 "name": "BaseBdev3", 00:11:49.728 "uuid": "2dd08393-b6eb-42b0-bcab-0eba09438a6f", 00:11:49.728 "is_configured": true, 
00:11:49.728 "data_offset": 2048, 00:11:49.728 "data_size": 63488 00:11:49.728 }, 00:11:49.728 { 00:11:49.728 "name": "BaseBdev4", 00:11:49.728 "uuid": "62edee0f-dce7-400d-a188-6e135c48756a", 00:11:49.728 "is_configured": true, 00:11:49.728 "data_offset": 2048, 00:11:49.728 "data_size": 63488 00:11:49.728 } 00:11:49.728 ] 00:11:49.728 } 00:11:49.728 } 00:11:49.728 }' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:49.728 BaseBdev2 00:11:49.728 BaseBdev3 00:11:49.728 BaseBdev4' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.728 09:11:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.728 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.986 [2024-10-15 09:11:07.653363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:49.986 09:11:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.986 "name": "Existed_Raid", 00:11:49.986 "uuid": "d600d8ae-4ba5-4b2b-9b45-aa29aa14726e", 00:11:49.986 "strip_size_kb": 0, 00:11:49.986 
"state": "online", 00:11:49.986 "raid_level": "raid1", 00:11:49.986 "superblock": true, 00:11:49.986 "num_base_bdevs": 4, 00:11:49.986 "num_base_bdevs_discovered": 3, 00:11:49.986 "num_base_bdevs_operational": 3, 00:11:49.986 "base_bdevs_list": [ 00:11:49.986 { 00:11:49.986 "name": null, 00:11:49.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.986 "is_configured": false, 00:11:49.986 "data_offset": 0, 00:11:49.986 "data_size": 63488 00:11:49.986 }, 00:11:49.986 { 00:11:49.986 "name": "BaseBdev2", 00:11:49.986 "uuid": "6c18d838-f26f-44e0-ae52-ca17db81cf1a", 00:11:49.986 "is_configured": true, 00:11:49.986 "data_offset": 2048, 00:11:49.986 "data_size": 63488 00:11:49.986 }, 00:11:49.986 { 00:11:49.986 "name": "BaseBdev3", 00:11:49.986 "uuid": "2dd08393-b6eb-42b0-bcab-0eba09438a6f", 00:11:49.986 "is_configured": true, 00:11:49.986 "data_offset": 2048, 00:11:49.986 "data_size": 63488 00:11:49.986 }, 00:11:49.986 { 00:11:49.986 "name": "BaseBdev4", 00:11:49.986 "uuid": "62edee0f-dce7-400d-a188-6e135c48756a", 00:11:49.986 "is_configured": true, 00:11:49.986 "data_offset": 2048, 00:11:49.986 "data_size": 63488 00:11:49.986 } 00:11:49.986 ] 00:11:49.986 }' 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.986 09:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.553 09:11:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.553 [2024-10-15 09:11:08.257917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.553 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:50.554 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.554 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.554 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.554 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:50.554 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:50.554 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:50.554 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.554 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.554 [2024-10-15 09:11:08.422841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.811 [2024-10-15 09:11:08.591087] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:50.811 [2024-10-15 09:11:08.591232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.811 [2024-10-15 09:11:08.703580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.811 [2024-10-15 09:11:08.703668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.811 [2024-10-15 09:11:08.703682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:50.811 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:51.069 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.070 BaseBdev2 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:51.070 [ 00:11:51.070 { 00:11:51.070 "name": "BaseBdev2", 00:11:51.070 "aliases": [ 00:11:51.070 "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5" 00:11:51.070 ], 00:11:51.070 "product_name": "Malloc disk", 00:11:51.070 "block_size": 512, 00:11:51.070 "num_blocks": 65536, 00:11:51.070 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:51.070 "assigned_rate_limits": { 00:11:51.070 "rw_ios_per_sec": 0, 00:11:51.070 "rw_mbytes_per_sec": 0, 00:11:51.070 "r_mbytes_per_sec": 0, 00:11:51.070 "w_mbytes_per_sec": 0 00:11:51.070 }, 00:11:51.070 "claimed": false, 00:11:51.070 "zoned": false, 00:11:51.070 "supported_io_types": { 00:11:51.070 "read": true, 00:11:51.070 "write": true, 00:11:51.070 "unmap": true, 00:11:51.070 "flush": true, 00:11:51.070 "reset": true, 00:11:51.070 "nvme_admin": false, 00:11:51.070 "nvme_io": false, 00:11:51.070 "nvme_io_md": false, 00:11:51.070 "write_zeroes": true, 00:11:51.070 "zcopy": true, 00:11:51.070 "get_zone_info": false, 00:11:51.070 "zone_management": false, 00:11:51.070 "zone_append": false, 00:11:51.070 "compare": false, 00:11:51.070 "compare_and_write": false, 00:11:51.070 "abort": true, 00:11:51.070 "seek_hole": false, 00:11:51.070 "seek_data": false, 00:11:51.070 "copy": true, 00:11:51.070 "nvme_iov_md": false 00:11:51.070 }, 00:11:51.070 "memory_domains": [ 00:11:51.070 { 00:11:51.070 "dma_device_id": "system", 00:11:51.070 "dma_device_type": 1 00:11:51.070 }, 00:11:51.070 { 00:11:51.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.070 "dma_device_type": 2 00:11:51.070 } 00:11:51.070 ], 00:11:51.070 "driver_specific": {} 00:11:51.070 } 00:11:51.070 ] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:51.070 09:11:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.070 BaseBdev3 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.070 09:11:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.070 [ 00:11:51.070 { 00:11:51.070 "name": "BaseBdev3", 00:11:51.070 "aliases": [ 00:11:51.070 "e573939b-e57e-476f-bfea-b9e29bb06047" 00:11:51.070 ], 00:11:51.070 "product_name": "Malloc disk", 00:11:51.070 "block_size": 512, 00:11:51.070 "num_blocks": 65536, 00:11:51.070 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:51.070 "assigned_rate_limits": { 00:11:51.070 "rw_ios_per_sec": 0, 00:11:51.070 "rw_mbytes_per_sec": 0, 00:11:51.070 "r_mbytes_per_sec": 0, 00:11:51.070 "w_mbytes_per_sec": 0 00:11:51.070 }, 00:11:51.070 "claimed": false, 00:11:51.070 "zoned": false, 00:11:51.070 "supported_io_types": { 00:11:51.070 "read": true, 00:11:51.070 "write": true, 00:11:51.070 "unmap": true, 00:11:51.070 "flush": true, 00:11:51.070 "reset": true, 00:11:51.070 "nvme_admin": false, 00:11:51.070 "nvme_io": false, 00:11:51.070 "nvme_io_md": false, 00:11:51.070 "write_zeroes": true, 00:11:51.070 "zcopy": true, 00:11:51.070 "get_zone_info": false, 00:11:51.070 "zone_management": false, 00:11:51.070 "zone_append": false, 00:11:51.070 "compare": false, 00:11:51.070 "compare_and_write": false, 00:11:51.070 "abort": true, 00:11:51.070 "seek_hole": false, 00:11:51.070 "seek_data": false, 00:11:51.070 "copy": true, 00:11:51.070 "nvme_iov_md": false 00:11:51.070 }, 00:11:51.070 "memory_domains": [ 00:11:51.070 { 00:11:51.070 "dma_device_id": "system", 00:11:51.070 "dma_device_type": 1 00:11:51.070 }, 00:11:51.070 { 00:11:51.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.070 "dma_device_type": 2 00:11:51.070 } 00:11:51.070 ], 00:11:51.070 "driver_specific": {} 00:11:51.070 } 00:11:51.070 ] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.070 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.350 BaseBdev4 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.350 09:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.350 [ 00:11:51.350 { 00:11:51.350 "name": "BaseBdev4", 00:11:51.350 "aliases": [ 00:11:51.350 "5223e221-3020-4333-bff5-5b3f01db42a8" 00:11:51.350 ], 00:11:51.350 "product_name": "Malloc disk", 00:11:51.350 "block_size": 512, 00:11:51.350 "num_blocks": 65536, 00:11:51.350 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:51.350 "assigned_rate_limits": { 00:11:51.350 "rw_ios_per_sec": 0, 00:11:51.350 "rw_mbytes_per_sec": 0, 00:11:51.350 "r_mbytes_per_sec": 0, 00:11:51.350 "w_mbytes_per_sec": 0 00:11:51.350 }, 00:11:51.350 "claimed": false, 00:11:51.350 "zoned": false, 00:11:51.350 "supported_io_types": { 00:11:51.350 "read": true, 00:11:51.350 "write": true, 00:11:51.350 "unmap": true, 00:11:51.350 "flush": true, 00:11:51.350 "reset": true, 00:11:51.350 "nvme_admin": false, 00:11:51.350 "nvme_io": false, 00:11:51.350 "nvme_io_md": false, 00:11:51.350 "write_zeroes": true, 00:11:51.350 "zcopy": true, 00:11:51.350 "get_zone_info": false, 00:11:51.350 "zone_management": false, 00:11:51.350 "zone_append": false, 00:11:51.350 "compare": false, 00:11:51.350 "compare_and_write": false, 00:11:51.350 "abort": true, 00:11:51.350 "seek_hole": false, 00:11:51.350 "seek_data": false, 00:11:51.350 "copy": true, 00:11:51.350 "nvme_iov_md": false 00:11:51.350 }, 00:11:51.350 "memory_domains": [ 00:11:51.350 { 00:11:51.350 "dma_device_id": "system", 00:11:51.350 "dma_device_type": 1 00:11:51.350 }, 00:11:51.350 { 00:11:51.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.350 "dma_device_type": 2 00:11:51.350 } 00:11:51.350 ], 00:11:51.350 "driver_specific": {} 00:11:51.350 } 00:11:51.350 ] 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.350 [2024-10-15 09:11:09.036754] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.350 [2024-10-15 09:11:09.036948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.350 [2024-10-15 09:11:09.036995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.350 [2024-10-15 09:11:09.039482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.350 [2024-10-15 09:11:09.039620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.350 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.350 "name": "Existed_Raid", 00:11:51.350 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:51.350 "strip_size_kb": 0, 00:11:51.350 "state": "configuring", 00:11:51.350 "raid_level": "raid1", 00:11:51.350 "superblock": true, 00:11:51.350 "num_base_bdevs": 4, 00:11:51.350 "num_base_bdevs_discovered": 3, 00:11:51.350 "num_base_bdevs_operational": 4, 00:11:51.350 "base_bdevs_list": [ 00:11:51.350 { 00:11:51.350 "name": "BaseBdev1", 00:11:51.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.350 "is_configured": false, 00:11:51.350 "data_offset": 0, 00:11:51.350 "data_size": 0 00:11:51.350 }, 00:11:51.350 { 00:11:51.350 "name": "BaseBdev2", 00:11:51.350 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 
00:11:51.350 "is_configured": true, 00:11:51.350 "data_offset": 2048, 00:11:51.350 "data_size": 63488 00:11:51.350 }, 00:11:51.350 { 00:11:51.350 "name": "BaseBdev3", 00:11:51.350 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:51.350 "is_configured": true, 00:11:51.350 "data_offset": 2048, 00:11:51.350 "data_size": 63488 00:11:51.350 }, 00:11:51.350 { 00:11:51.350 "name": "BaseBdev4", 00:11:51.351 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:51.351 "is_configured": true, 00:11:51.351 "data_offset": 2048, 00:11:51.351 "data_size": 63488 00:11:51.351 } 00:11:51.351 ] 00:11:51.351 }' 00:11:51.351 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.351 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.619 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:51.619 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.619 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.619 [2024-10-15 09:11:09.503985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:51.619 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.619 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.620 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.877 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.877 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.877 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.877 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.877 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.877 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.877 "name": "Existed_Raid", 00:11:51.877 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:51.877 "strip_size_kb": 0, 00:11:51.877 "state": "configuring", 00:11:51.877 "raid_level": "raid1", 00:11:51.877 "superblock": true, 00:11:51.877 "num_base_bdevs": 4, 00:11:51.877 "num_base_bdevs_discovered": 2, 00:11:51.877 "num_base_bdevs_operational": 4, 00:11:51.877 "base_bdevs_list": [ 00:11:51.877 { 00:11:51.877 "name": "BaseBdev1", 00:11:51.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.877 "is_configured": false, 00:11:51.877 "data_offset": 0, 00:11:51.877 "data_size": 0 00:11:51.877 }, 00:11:51.877 { 00:11:51.877 "name": null, 00:11:51.877 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:51.877 
"is_configured": false, 00:11:51.877 "data_offset": 0, 00:11:51.877 "data_size": 63488 00:11:51.877 }, 00:11:51.877 { 00:11:51.877 "name": "BaseBdev3", 00:11:51.877 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:51.877 "is_configured": true, 00:11:51.877 "data_offset": 2048, 00:11:51.877 "data_size": 63488 00:11:51.877 }, 00:11:51.877 { 00:11:51.877 "name": "BaseBdev4", 00:11:51.877 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:51.877 "is_configured": true, 00:11:51.877 "data_offset": 2048, 00:11:51.877 "data_size": 63488 00:11:51.877 } 00:11:51.877 ] 00:11:51.877 }' 00:11:51.877 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.877 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.136 [2024-10-15 09:11:09.994232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.136 BaseBdev1 
00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.136 09:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.136 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.136 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.136 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.136 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.136 [ 00:11:52.136 { 00:11:52.136 "name": "BaseBdev1", 00:11:52.136 "aliases": [ 00:11:52.136 "86f77eb4-6b58-44bc-9f28-7de51453020a" 00:11:52.136 ], 00:11:52.136 "product_name": "Malloc disk", 00:11:52.136 "block_size": 512, 00:11:52.136 "num_blocks": 65536, 00:11:52.136 "uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:52.136 "assigned_rate_limits": { 00:11:52.136 
"rw_ios_per_sec": 0, 00:11:52.136 "rw_mbytes_per_sec": 0, 00:11:52.136 "r_mbytes_per_sec": 0, 00:11:52.136 "w_mbytes_per_sec": 0 00:11:52.136 }, 00:11:52.136 "claimed": true, 00:11:52.136 "claim_type": "exclusive_write", 00:11:52.136 "zoned": false, 00:11:52.136 "supported_io_types": { 00:11:52.136 "read": true, 00:11:52.136 "write": true, 00:11:52.136 "unmap": true, 00:11:52.136 "flush": true, 00:11:52.136 "reset": true, 00:11:52.136 "nvme_admin": false, 00:11:52.136 "nvme_io": false, 00:11:52.136 "nvme_io_md": false, 00:11:52.136 "write_zeroes": true, 00:11:52.136 "zcopy": true, 00:11:52.136 "get_zone_info": false, 00:11:52.136 "zone_management": false, 00:11:52.136 "zone_append": false, 00:11:52.395 "compare": false, 00:11:52.395 "compare_and_write": false, 00:11:52.395 "abort": true, 00:11:52.395 "seek_hole": false, 00:11:52.395 "seek_data": false, 00:11:52.395 "copy": true, 00:11:52.395 "nvme_iov_md": false 00:11:52.395 }, 00:11:52.395 "memory_domains": [ 00:11:52.395 { 00:11:52.395 "dma_device_id": "system", 00:11:52.395 "dma_device_type": 1 00:11:52.395 }, 00:11:52.395 { 00:11:52.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.395 "dma_device_type": 2 00:11:52.395 } 00:11:52.395 ], 00:11:52.395 "driver_specific": {} 00:11:52.395 } 00:11:52.395 ] 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.395 "name": "Existed_Raid", 00:11:52.395 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:52.395 "strip_size_kb": 0, 00:11:52.395 "state": "configuring", 00:11:52.395 "raid_level": "raid1", 00:11:52.395 "superblock": true, 00:11:52.395 "num_base_bdevs": 4, 00:11:52.395 "num_base_bdevs_discovered": 3, 00:11:52.395 "num_base_bdevs_operational": 4, 00:11:52.395 "base_bdevs_list": [ 00:11:52.395 { 00:11:52.395 "name": "BaseBdev1", 00:11:52.395 "uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:52.395 "is_configured": true, 00:11:52.395 "data_offset": 2048, 00:11:52.395 "data_size": 63488 
00:11:52.395 }, 00:11:52.395 { 00:11:52.395 "name": null, 00:11:52.395 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:52.395 "is_configured": false, 00:11:52.395 "data_offset": 0, 00:11:52.395 "data_size": 63488 00:11:52.395 }, 00:11:52.395 { 00:11:52.395 "name": "BaseBdev3", 00:11:52.395 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:52.395 "is_configured": true, 00:11:52.395 "data_offset": 2048, 00:11:52.395 "data_size": 63488 00:11:52.395 }, 00:11:52.395 { 00:11:52.395 "name": "BaseBdev4", 00:11:52.395 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:52.395 "is_configured": true, 00:11:52.395 "data_offset": 2048, 00:11:52.395 "data_size": 63488 00:11:52.395 } 00:11:52.395 ] 00:11:52.395 }' 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.395 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.653 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.653 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.653 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.653 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.653 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.913 
[2024-10-15 09:11:10.557416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.913 09:11:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.913 "name": "Existed_Raid", 00:11:52.913 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:52.913 "strip_size_kb": 0, 00:11:52.913 "state": "configuring", 00:11:52.913 "raid_level": "raid1", 00:11:52.913 "superblock": true, 00:11:52.913 "num_base_bdevs": 4, 00:11:52.913 "num_base_bdevs_discovered": 2, 00:11:52.913 "num_base_bdevs_operational": 4, 00:11:52.913 "base_bdevs_list": [ 00:11:52.913 { 00:11:52.913 "name": "BaseBdev1", 00:11:52.913 "uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:52.913 "is_configured": true, 00:11:52.913 "data_offset": 2048, 00:11:52.913 "data_size": 63488 00:11:52.913 }, 00:11:52.913 { 00:11:52.913 "name": null, 00:11:52.913 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:52.913 "is_configured": false, 00:11:52.913 "data_offset": 0, 00:11:52.913 "data_size": 63488 00:11:52.913 }, 00:11:52.913 { 00:11:52.913 "name": null, 00:11:52.913 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:52.913 "is_configured": false, 00:11:52.913 "data_offset": 0, 00:11:52.913 "data_size": 63488 00:11:52.913 }, 00:11:52.913 { 00:11:52.913 "name": "BaseBdev4", 00:11:52.913 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:52.913 "is_configured": true, 00:11:52.913 "data_offset": 2048, 00:11:52.913 "data_size": 63488 00:11:52.913 } 00:11:52.913 ] 00:11:52.913 }' 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.913 09:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.173 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.173 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:53.173 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.173 
09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.173 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.434 [2024-10-15 09:11:11.076598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.434 "name": "Existed_Raid", 00:11:53.434 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:53.434 "strip_size_kb": 0, 00:11:53.434 "state": "configuring", 00:11:53.434 "raid_level": "raid1", 00:11:53.434 "superblock": true, 00:11:53.434 "num_base_bdevs": 4, 00:11:53.434 "num_base_bdevs_discovered": 3, 00:11:53.434 "num_base_bdevs_operational": 4, 00:11:53.434 "base_bdevs_list": [ 00:11:53.434 { 00:11:53.434 "name": "BaseBdev1", 00:11:53.434 "uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:53.434 "is_configured": true, 00:11:53.434 "data_offset": 2048, 00:11:53.434 "data_size": 63488 00:11:53.434 }, 00:11:53.434 { 00:11:53.434 "name": null, 00:11:53.434 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:53.434 "is_configured": false, 00:11:53.434 "data_offset": 0, 00:11:53.434 "data_size": 63488 00:11:53.434 }, 00:11:53.434 { 00:11:53.434 "name": "BaseBdev3", 00:11:53.434 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:53.434 "is_configured": true, 00:11:53.434 "data_offset": 2048, 00:11:53.434 "data_size": 63488 00:11:53.434 }, 00:11:53.434 { 00:11:53.434 "name": "BaseBdev4", 00:11:53.434 "uuid": 
"5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:53.434 "is_configured": true, 00:11:53.434 "data_offset": 2048, 00:11:53.434 "data_size": 63488 00:11:53.434 } 00:11:53.434 ] 00:11:53.434 }' 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.434 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.694 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.694 [2024-10-15 09:11:11.571843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.955 "name": "Existed_Raid", 00:11:53.955 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:53.955 "strip_size_kb": 0, 00:11:53.955 "state": "configuring", 00:11:53.955 "raid_level": "raid1", 00:11:53.955 "superblock": true, 00:11:53.955 "num_base_bdevs": 4, 00:11:53.955 "num_base_bdevs_discovered": 2, 00:11:53.955 "num_base_bdevs_operational": 4, 00:11:53.955 "base_bdevs_list": [ 00:11:53.955 { 00:11:53.955 "name": null, 00:11:53.955 
"uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:53.955 "is_configured": false, 00:11:53.955 "data_offset": 0, 00:11:53.955 "data_size": 63488 00:11:53.955 }, 00:11:53.955 { 00:11:53.955 "name": null, 00:11:53.955 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:53.955 "is_configured": false, 00:11:53.955 "data_offset": 0, 00:11:53.955 "data_size": 63488 00:11:53.955 }, 00:11:53.955 { 00:11:53.955 "name": "BaseBdev3", 00:11:53.955 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:53.955 "is_configured": true, 00:11:53.955 "data_offset": 2048, 00:11:53.955 "data_size": 63488 00:11:53.955 }, 00:11:53.955 { 00:11:53.955 "name": "BaseBdev4", 00:11:53.955 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:53.955 "is_configured": true, 00:11:53.955 "data_offset": 2048, 00:11:53.955 "data_size": 63488 00:11:53.955 } 00:11:53.955 ] 00:11:53.955 }' 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.955 09:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.523 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.523 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:54.523 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.523 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.523 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.524 [2024-10-15 09:11:12.220655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.524 09:11:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.524 "name": "Existed_Raid", 00:11:54.524 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:54.524 "strip_size_kb": 0, 00:11:54.524 "state": "configuring", 00:11:54.524 "raid_level": "raid1", 00:11:54.524 "superblock": true, 00:11:54.524 "num_base_bdevs": 4, 00:11:54.524 "num_base_bdevs_discovered": 3, 00:11:54.524 "num_base_bdevs_operational": 4, 00:11:54.524 "base_bdevs_list": [ 00:11:54.524 { 00:11:54.524 "name": null, 00:11:54.524 "uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:54.524 "is_configured": false, 00:11:54.524 "data_offset": 0, 00:11:54.524 "data_size": 63488 00:11:54.524 }, 00:11:54.524 { 00:11:54.524 "name": "BaseBdev2", 00:11:54.524 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:54.524 "is_configured": true, 00:11:54.524 "data_offset": 2048, 00:11:54.524 "data_size": 63488 00:11:54.524 }, 00:11:54.524 { 00:11:54.524 "name": "BaseBdev3", 00:11:54.524 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:54.524 "is_configured": true, 00:11:54.524 "data_offset": 2048, 00:11:54.524 "data_size": 63488 00:11:54.524 }, 00:11:54.524 { 00:11:54.524 "name": "BaseBdev4", 00:11:54.524 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:54.524 "is_configured": true, 00:11:54.524 "data_offset": 2048, 00:11:54.524 "data_size": 63488 00:11:54.524 } 00:11:54.524 ] 00:11:54.524 }' 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.524 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.824 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.824 09:11:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.824 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.824 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:54.824 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 86f77eb4-6b58-44bc-9f28-7de51453020a 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.085 [2024-10-15 09:11:12.815352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:55.085 [2024-10-15 09:11:12.815782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:55.085 [2024-10-15 09:11:12.815809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.085 [2024-10-15 09:11:12.816121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:55.085 [2024-10-15 09:11:12.816309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:55.085 [2024-10-15 09:11:12.816319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:55.085 [2024-10-15 09:11:12.816472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.085 NewBaseBdev 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.085 09:11:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.085 [ 00:11:55.085 { 00:11:55.085 "name": "NewBaseBdev", 00:11:55.085 "aliases": [ 00:11:55.085 "86f77eb4-6b58-44bc-9f28-7de51453020a" 00:11:55.085 ], 00:11:55.085 "product_name": "Malloc disk", 00:11:55.085 "block_size": 512, 00:11:55.085 "num_blocks": 65536, 00:11:55.085 "uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:55.085 "assigned_rate_limits": { 00:11:55.085 "rw_ios_per_sec": 0, 00:11:55.085 "rw_mbytes_per_sec": 0, 00:11:55.085 "r_mbytes_per_sec": 0, 00:11:55.085 "w_mbytes_per_sec": 0 00:11:55.085 }, 00:11:55.085 "claimed": true, 00:11:55.085 "claim_type": "exclusive_write", 00:11:55.085 "zoned": false, 00:11:55.085 "supported_io_types": { 00:11:55.085 "read": true, 00:11:55.085 "write": true, 00:11:55.085 "unmap": true, 00:11:55.085 "flush": true, 00:11:55.085 "reset": true, 00:11:55.085 "nvme_admin": false, 00:11:55.085 "nvme_io": false, 00:11:55.085 "nvme_io_md": false, 00:11:55.085 "write_zeroes": true, 00:11:55.085 "zcopy": true, 00:11:55.085 "get_zone_info": false, 00:11:55.085 "zone_management": false, 00:11:55.085 "zone_append": false, 00:11:55.085 "compare": false, 00:11:55.085 "compare_and_write": false, 00:11:55.085 "abort": true, 00:11:55.085 "seek_hole": false, 00:11:55.085 "seek_data": false, 00:11:55.085 "copy": true, 00:11:55.085 "nvme_iov_md": false 00:11:55.085 }, 00:11:55.085 "memory_domains": [ 00:11:55.085 { 00:11:55.085 "dma_device_id": "system", 00:11:55.085 "dma_device_type": 1 00:11:55.085 }, 00:11:55.085 { 00:11:55.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.085 "dma_device_type": 2 00:11:55.085 } 00:11:55.085 ], 00:11:55.085 "driver_specific": {} 00:11:55.085 } 00:11:55.085 ] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.085 09:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.085 "name": "Existed_Raid", 00:11:55.085 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:55.085 "strip_size_kb": 0, 00:11:55.085 
"state": "online", 00:11:55.085 "raid_level": "raid1", 00:11:55.085 "superblock": true, 00:11:55.085 "num_base_bdevs": 4, 00:11:55.085 "num_base_bdevs_discovered": 4, 00:11:55.085 "num_base_bdevs_operational": 4, 00:11:55.085 "base_bdevs_list": [ 00:11:55.085 { 00:11:55.085 "name": "NewBaseBdev", 00:11:55.085 "uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:55.085 "is_configured": true, 00:11:55.085 "data_offset": 2048, 00:11:55.085 "data_size": 63488 00:11:55.085 }, 00:11:55.085 { 00:11:55.085 "name": "BaseBdev2", 00:11:55.085 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:55.085 "is_configured": true, 00:11:55.085 "data_offset": 2048, 00:11:55.085 "data_size": 63488 00:11:55.085 }, 00:11:55.085 { 00:11:55.085 "name": "BaseBdev3", 00:11:55.085 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:55.085 "is_configured": true, 00:11:55.085 "data_offset": 2048, 00:11:55.085 "data_size": 63488 00:11:55.085 }, 00:11:55.085 { 00:11:55.085 "name": "BaseBdev4", 00:11:55.085 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:55.085 "is_configured": true, 00:11:55.085 "data_offset": 2048, 00:11:55.085 "data_size": 63488 00:11:55.085 } 00:11:55.085 ] 00:11:55.085 }' 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.085 09:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.654 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:55.654 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:55.654 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:55.654 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:55.654 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:55.654 
09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:55.654 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:55.655 [2024-10-15 09:11:13.338953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:55.655 "name": "Existed_Raid", 00:11:55.655 "aliases": [ 00:11:55.655 "d4bcb387-f39f-4e9f-8950-c2408581ae0a" 00:11:55.655 ], 00:11:55.655 "product_name": "Raid Volume", 00:11:55.655 "block_size": 512, 00:11:55.655 "num_blocks": 63488, 00:11:55.655 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:55.655 "assigned_rate_limits": { 00:11:55.655 "rw_ios_per_sec": 0, 00:11:55.655 "rw_mbytes_per_sec": 0, 00:11:55.655 "r_mbytes_per_sec": 0, 00:11:55.655 "w_mbytes_per_sec": 0 00:11:55.655 }, 00:11:55.655 "claimed": false, 00:11:55.655 "zoned": false, 00:11:55.655 "supported_io_types": { 00:11:55.655 "read": true, 00:11:55.655 "write": true, 00:11:55.655 "unmap": false, 00:11:55.655 "flush": false, 00:11:55.655 "reset": true, 00:11:55.655 "nvme_admin": false, 00:11:55.655 "nvme_io": false, 00:11:55.655 "nvme_io_md": false, 00:11:55.655 "write_zeroes": true, 00:11:55.655 "zcopy": false, 00:11:55.655 "get_zone_info": false, 00:11:55.655 "zone_management": false, 00:11:55.655 "zone_append": false, 00:11:55.655 "compare": false, 00:11:55.655 "compare_and_write": false, 00:11:55.655 
"abort": false, 00:11:55.655 "seek_hole": false, 00:11:55.655 "seek_data": false, 00:11:55.655 "copy": false, 00:11:55.655 "nvme_iov_md": false 00:11:55.655 }, 00:11:55.655 "memory_domains": [ 00:11:55.655 { 00:11:55.655 "dma_device_id": "system", 00:11:55.655 "dma_device_type": 1 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.655 "dma_device_type": 2 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "dma_device_id": "system", 00:11:55.655 "dma_device_type": 1 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.655 "dma_device_type": 2 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "dma_device_id": "system", 00:11:55.655 "dma_device_type": 1 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.655 "dma_device_type": 2 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "dma_device_id": "system", 00:11:55.655 "dma_device_type": 1 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.655 "dma_device_type": 2 00:11:55.655 } 00:11:55.655 ], 00:11:55.655 "driver_specific": { 00:11:55.655 "raid": { 00:11:55.655 "uuid": "d4bcb387-f39f-4e9f-8950-c2408581ae0a", 00:11:55.655 "strip_size_kb": 0, 00:11:55.655 "state": "online", 00:11:55.655 "raid_level": "raid1", 00:11:55.655 "superblock": true, 00:11:55.655 "num_base_bdevs": 4, 00:11:55.655 "num_base_bdevs_discovered": 4, 00:11:55.655 "num_base_bdevs_operational": 4, 00:11:55.655 "base_bdevs_list": [ 00:11:55.655 { 00:11:55.655 "name": "NewBaseBdev", 00:11:55.655 "uuid": "86f77eb4-6b58-44bc-9f28-7de51453020a", 00:11:55.655 "is_configured": true, 00:11:55.655 "data_offset": 2048, 00:11:55.655 "data_size": 63488 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "name": "BaseBdev2", 00:11:55.655 "uuid": "1ccf4202-7d93-4c88-9d58-bbeafc5c67a5", 00:11:55.655 "is_configured": true, 00:11:55.655 "data_offset": 2048, 00:11:55.655 "data_size": 63488 00:11:55.655 }, 00:11:55.655 { 
00:11:55.655 "name": "BaseBdev3", 00:11:55.655 "uuid": "e573939b-e57e-476f-bfea-b9e29bb06047", 00:11:55.655 "is_configured": true, 00:11:55.655 "data_offset": 2048, 00:11:55.655 "data_size": 63488 00:11:55.655 }, 00:11:55.655 { 00:11:55.655 "name": "BaseBdev4", 00:11:55.655 "uuid": "5223e221-3020-4333-bff5-5b3f01db42a8", 00:11:55.655 "is_configured": true, 00:11:55.655 "data_offset": 2048, 00:11:55.655 "data_size": 63488 00:11:55.655 } 00:11:55.655 ] 00:11:55.655 } 00:11:55.655 } 00:11:55.655 }' 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:55.655 BaseBdev2 00:11:55.655 BaseBdev3 00:11:55.655 BaseBdev4' 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.655 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.915 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.916 [2024-10-15 09:11:13.697973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.916 [2024-10-15 09:11:13.698121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.916 [2024-10-15 09:11:13.698246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.916 [2024-10-15 09:11:13.698601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.916 [2024-10-15 09:11:13.698618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73967 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73967 ']' 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73967 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73967 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.916 killing process with pid 73967 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73967' 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73967 00:11:55.916 [2024-10-15 09:11:13.747546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.916 09:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73967 00:11:56.484 [2024-10-15 09:11:14.206732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.864 09:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:57.864 ************************************ 00:11:57.864 END TEST raid_state_function_test_sb 00:11:57.864 ************************************ 00:11:57.864 00:11:57.864 real 0m12.224s 
00:11:57.864 user 0m19.172s 00:11:57.864 sys 0m2.244s 00:11:57.864 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.864 09:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.864 09:11:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:57.864 09:11:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:57.864 09:11:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.864 09:11:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.864 ************************************ 00:11:57.864 START TEST raid_superblock_test 00:11:57.864 ************************************ 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:57.864 09:11:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74640 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74640 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74640 ']' 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.864 09:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.865 09:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.865 09:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.865 [2024-10-15 09:11:15.675776] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:11:57.865 [2024-10-15 09:11:15.675935] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74640 ] 00:11:58.123 [2024-10-15 09:11:15.844257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.123 [2024-10-15 09:11:15.994450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.436 [2024-10-15 09:11:16.246849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.436 [2024-10-15 09:11:16.246916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:58.696 
09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.696 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.696 malloc1 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 [2024-10-15 09:11:16.598838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:58.956 [2024-10-15 09:11:16.599039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.956 [2024-10-15 09:11:16.599091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:58.956 [2024-10-15 09:11:16.599130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.956 [2024-10-15 09:11:16.602020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.956 [2024-10-15 09:11:16.602162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:58.956 pt1 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 malloc2 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 [2024-10-15 09:11:16.668380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.956 [2024-10-15 09:11:16.668459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.956 [2024-10-15 09:11:16.668488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:58.956 [2024-10-15 09:11:16.668500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.956 [2024-10-15 09:11:16.671125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.956 [2024-10-15 09:11:16.671248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.956 
pt2 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 malloc3 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 [2024-10-15 09:11:16.757374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:58.956 [2024-10-15 09:11:16.757534] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.956 [2024-10-15 09:11:16.757579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:58.956 [2024-10-15 09:11:16.757611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.956 [2024-10-15 09:11:16.760264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.956 [2024-10-15 09:11:16.760347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:58.956 pt3 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 malloc4 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 [2024-10-15 09:11:16.825461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:58.956 [2024-10-15 09:11:16.825634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.956 [2024-10-15 09:11:16.825664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:58.956 [2024-10-15 09:11:16.825675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.956 [2024-10-15 09:11:16.828289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.956 [2024-10-15 09:11:16.828333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:58.956 pt4 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.956 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.956 [2024-10-15 09:11:16.837501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:58.956 [2024-10-15 09:11:16.839629] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.956 [2024-10-15 09:11:16.839713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:58.956 [2024-10-15 09:11:16.839757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:58.956 [2024-10-15 09:11:16.839989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:58.957 [2024-10-15 09:11:16.840002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.957 [2024-10-15 09:11:16.840343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:58.957 [2024-10-15 09:11:16.840543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:58.957 [2024-10-15 09:11:16.840559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:58.957 [2024-10-15 09:11:16.840762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.957 
09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.957 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.216 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.216 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.216 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.216 "name": "raid_bdev1", 00:11:59.216 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:11:59.216 "strip_size_kb": 0, 00:11:59.216 "state": "online", 00:11:59.216 "raid_level": "raid1", 00:11:59.216 "superblock": true, 00:11:59.216 "num_base_bdevs": 4, 00:11:59.216 "num_base_bdevs_discovered": 4, 00:11:59.216 "num_base_bdevs_operational": 4, 00:11:59.216 "base_bdevs_list": [ 00:11:59.216 { 00:11:59.216 "name": "pt1", 00:11:59.216 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.216 "is_configured": true, 00:11:59.216 "data_offset": 2048, 00:11:59.216 "data_size": 63488 00:11:59.216 }, 00:11:59.216 { 00:11:59.216 "name": "pt2", 00:11:59.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.216 "is_configured": true, 00:11:59.216 "data_offset": 2048, 00:11:59.216 "data_size": 63488 00:11:59.216 }, 00:11:59.216 { 00:11:59.216 "name": "pt3", 00:11:59.216 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.216 "is_configured": true, 00:11:59.216 "data_offset": 2048, 00:11:59.216 "data_size": 63488 
00:11:59.216 }, 00:11:59.216 { 00:11:59.216 "name": "pt4", 00:11:59.216 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:59.216 "is_configured": true, 00:11:59.216 "data_offset": 2048, 00:11:59.216 "data_size": 63488 00:11:59.216 } 00:11:59.216 ] 00:11:59.216 }' 00:11:59.216 09:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.216 09:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.475 [2024-10-15 09:11:17.309093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.475 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.475 "name": "raid_bdev1", 00:11:59.475 "aliases": [ 00:11:59.475 "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6" 00:11:59.475 ], 
00:11:59.475 "product_name": "Raid Volume", 00:11:59.475 "block_size": 512, 00:11:59.475 "num_blocks": 63488, 00:11:59.475 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:11:59.475 "assigned_rate_limits": { 00:11:59.475 "rw_ios_per_sec": 0, 00:11:59.475 "rw_mbytes_per_sec": 0, 00:11:59.475 "r_mbytes_per_sec": 0, 00:11:59.475 "w_mbytes_per_sec": 0 00:11:59.475 }, 00:11:59.475 "claimed": false, 00:11:59.475 "zoned": false, 00:11:59.475 "supported_io_types": { 00:11:59.475 "read": true, 00:11:59.475 "write": true, 00:11:59.475 "unmap": false, 00:11:59.475 "flush": false, 00:11:59.475 "reset": true, 00:11:59.475 "nvme_admin": false, 00:11:59.475 "nvme_io": false, 00:11:59.475 "nvme_io_md": false, 00:11:59.475 "write_zeroes": true, 00:11:59.475 "zcopy": false, 00:11:59.475 "get_zone_info": false, 00:11:59.475 "zone_management": false, 00:11:59.475 "zone_append": false, 00:11:59.475 "compare": false, 00:11:59.475 "compare_and_write": false, 00:11:59.475 "abort": false, 00:11:59.475 "seek_hole": false, 00:11:59.475 "seek_data": false, 00:11:59.475 "copy": false, 00:11:59.475 "nvme_iov_md": false 00:11:59.475 }, 00:11:59.475 "memory_domains": [ 00:11:59.475 { 00:11:59.475 "dma_device_id": "system", 00:11:59.475 "dma_device_type": 1 00:11:59.475 }, 00:11:59.475 { 00:11:59.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.475 "dma_device_type": 2 00:11:59.475 }, 00:11:59.475 { 00:11:59.475 "dma_device_id": "system", 00:11:59.475 "dma_device_type": 1 00:11:59.475 }, 00:11:59.475 { 00:11:59.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.475 "dma_device_type": 2 00:11:59.475 }, 00:11:59.475 { 00:11:59.475 "dma_device_id": "system", 00:11:59.475 "dma_device_type": 1 00:11:59.475 }, 00:11:59.475 { 00:11:59.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.475 "dma_device_type": 2 00:11:59.475 }, 00:11:59.475 { 00:11:59.475 "dma_device_id": "system", 00:11:59.475 "dma_device_type": 1 00:11:59.475 }, 00:11:59.475 { 00:11:59.475 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:59.475 "dma_device_type": 2 00:11:59.475 } 00:11:59.475 ], 00:11:59.475 "driver_specific": { 00:11:59.475 "raid": { 00:11:59.475 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:11:59.476 "strip_size_kb": 0, 00:11:59.476 "state": "online", 00:11:59.476 "raid_level": "raid1", 00:11:59.476 "superblock": true, 00:11:59.476 "num_base_bdevs": 4, 00:11:59.476 "num_base_bdevs_discovered": 4, 00:11:59.476 "num_base_bdevs_operational": 4, 00:11:59.476 "base_bdevs_list": [ 00:11:59.476 { 00:11:59.476 "name": "pt1", 00:11:59.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.476 "is_configured": true, 00:11:59.476 "data_offset": 2048, 00:11:59.476 "data_size": 63488 00:11:59.476 }, 00:11:59.476 { 00:11:59.476 "name": "pt2", 00:11:59.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.476 "is_configured": true, 00:11:59.476 "data_offset": 2048, 00:11:59.476 "data_size": 63488 00:11:59.476 }, 00:11:59.476 { 00:11:59.476 "name": "pt3", 00:11:59.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.476 "is_configured": true, 00:11:59.476 "data_offset": 2048, 00:11:59.476 "data_size": 63488 00:11:59.476 }, 00:11:59.476 { 00:11:59.476 "name": "pt4", 00:11:59.476 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:59.476 "is_configured": true, 00:11:59.476 "data_offset": 2048, 00:11:59.476 "data_size": 63488 00:11:59.476 } 00:11:59.476 ] 00:11:59.476 } 00:11:59.476 } 00:11:59.476 }' 00:11:59.476 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:59.735 pt2 00:11:59.735 pt3 00:11:59.735 pt4' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.735 09:11:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.735 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.995 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.995 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.995 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.995 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.995 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:59.995 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:59.995 [2024-10-15 09:11:17.644480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.995 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.995 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a29b00b8-98d2-476e-8dbc-bcb854a9c9b6 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a29b00b8-98d2-476e-8dbc-bcb854a9c9b6 ']' 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 [2024-10-15 09:11:17.688006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.996 [2024-10-15 09:11:17.688043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.996 [2024-10-15 09:11:17.688157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.996 [2024-10-15 09:11:17.688264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.996 [2024-10-15 09:11:17.688286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 [2024-10-15 09:11:17.843790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:59.996 [2024-10-15 09:11:17.845954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:59.996 [2024-10-15 09:11:17.846015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:59.996 [2024-10-15 09:11:17.846053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:59.996 [2024-10-15 09:11:17.846109] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:59.996 [2024-10-15 09:11:17.846172] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:59.996 [2024-10-15 09:11:17.846195] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:59.996 [2024-10-15 09:11:17.846216] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:59.996 [2024-10-15 09:11:17.846231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.996 [2024-10-15 09:11:17.846245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:11:59.996 request: 00:11:59.996 { 00:11:59.996 "name": "raid_bdev1", 00:11:59.996 "raid_level": "raid1", 00:11:59.996 "base_bdevs": [ 00:11:59.996 "malloc1", 00:11:59.996 "malloc2", 00:11:59.996 "malloc3", 00:11:59.996 "malloc4" 00:11:59.996 ], 00:11:59.996 "superblock": false, 00:11:59.996 "method": "bdev_raid_create", 00:11:59.996 "req_id": 1 00:11:59.996 } 00:11:59.996 Got JSON-RPC error response 00:11:59.996 response: 00:11:59.996 { 00:11:59.996 "code": -17, 00:11:59.996 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:59.996 } 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.996 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:00.254 09:11:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.254 [2024-10-15 09:11:17.903654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:00.254 [2024-10-15 09:11:17.903796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.254 [2024-10-15 09:11:17.903835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:00.254 [2024-10-15 09:11:17.903885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.254 [2024-10-15 09:11:17.906329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.254 [2024-10-15 09:11:17.906413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:00.254 [2024-10-15 09:11:17.906534] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:00.254 [2024-10-15 09:11:17.906612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:00.254 pt1 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.254 09:11:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.254 "name": "raid_bdev1", 00:12:00.254 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:00.254 "strip_size_kb": 0, 00:12:00.254 "state": "configuring", 00:12:00.254 "raid_level": "raid1", 00:12:00.254 "superblock": true, 00:12:00.254 "num_base_bdevs": 4, 00:12:00.254 "num_base_bdevs_discovered": 1, 00:12:00.254 "num_base_bdevs_operational": 4, 00:12:00.254 "base_bdevs_list": [ 00:12:00.254 { 00:12:00.254 "name": "pt1", 00:12:00.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:00.254 "is_configured": true, 00:12:00.254 "data_offset": 2048, 00:12:00.254 "data_size": 63488 00:12:00.254 }, 00:12:00.254 { 00:12:00.254 "name": null, 00:12:00.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.254 "is_configured": false, 00:12:00.254 "data_offset": 2048, 00:12:00.254 "data_size": 63488 00:12:00.254 }, 00:12:00.254 { 00:12:00.254 "name": null, 00:12:00.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.254 
"is_configured": false, 00:12:00.254 "data_offset": 2048, 00:12:00.254 "data_size": 63488 00:12:00.254 }, 00:12:00.254 { 00:12:00.254 "name": null, 00:12:00.254 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.254 "is_configured": false, 00:12:00.254 "data_offset": 2048, 00:12:00.254 "data_size": 63488 00:12:00.254 } 00:12:00.254 ] 00:12:00.254 }' 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.254 09:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.513 [2024-10-15 09:11:18.358882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:00.513 [2024-10-15 09:11:18.358962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.513 [2024-10-15 09:11:18.358985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:00.513 [2024-10-15 09:11:18.358998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.513 [2024-10-15 09:11:18.359529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.513 [2024-10-15 09:11:18.359579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:00.513 [2024-10-15 09:11:18.359674] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:00.513 [2024-10-15 09:11:18.359724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:00.513 pt2 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.513 [2024-10-15 09:11:18.370931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:00.513 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.514 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.772 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.772 "name": "raid_bdev1", 00:12:00.772 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:00.772 "strip_size_kb": 0, 00:12:00.772 "state": "configuring", 00:12:00.772 "raid_level": "raid1", 00:12:00.772 "superblock": true, 00:12:00.772 "num_base_bdevs": 4, 00:12:00.772 "num_base_bdevs_discovered": 1, 00:12:00.772 "num_base_bdevs_operational": 4, 00:12:00.772 "base_bdevs_list": [ 00:12:00.772 { 00:12:00.772 "name": "pt1", 00:12:00.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:00.772 "is_configured": true, 00:12:00.772 "data_offset": 2048, 00:12:00.772 "data_size": 63488 00:12:00.772 }, 00:12:00.772 { 00:12:00.772 "name": null, 00:12:00.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.772 "is_configured": false, 00:12:00.772 "data_offset": 0, 00:12:00.772 "data_size": 63488 00:12:00.772 }, 00:12:00.772 { 00:12:00.772 "name": null, 00:12:00.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.772 "is_configured": false, 00:12:00.772 "data_offset": 2048, 00:12:00.772 "data_size": 63488 00:12:00.772 }, 00:12:00.772 { 00:12:00.772 "name": null, 00:12:00.772 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.772 "is_configured": false, 00:12:00.772 "data_offset": 2048, 00:12:00.772 "data_size": 63488 00:12:00.772 } 00:12:00.772 ] 00:12:00.772 }' 00:12:00.772 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.772 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.032 [2024-10-15 09:11:18.850090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:01.032 [2024-10-15 09:11:18.850222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.032 [2024-10-15 09:11:18.850277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:01.032 [2024-10-15 09:11:18.850313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.032 [2024-10-15 09:11:18.850827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.032 [2024-10-15 09:11:18.850852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:01.032 [2024-10-15 09:11:18.850945] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:01.032 [2024-10-15 09:11:18.851011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:01.032 pt2 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:01.032 09:11:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.032 [2024-10-15 09:11:18.862048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:01.032 [2024-10-15 09:11:18.862153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.032 [2024-10-15 09:11:18.862182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:01.032 [2024-10-15 09:11:18.862193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.032 [2024-10-15 09:11:18.862638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.032 [2024-10-15 09:11:18.862663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:01.032 [2024-10-15 09:11:18.862757] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:01.032 [2024-10-15 09:11:18.862782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:01.032 pt3 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.032 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.032 [2024-10-15 09:11:18.869989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:01.032 [2024-10-15 
09:11:18.870037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.032 [2024-10-15 09:11:18.870056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:01.032 [2024-10-15 09:11:18.870066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.032 [2024-10-15 09:11:18.870494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.032 [2024-10-15 09:11:18.870522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:01.032 [2024-10-15 09:11:18.870594] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:01.032 [2024-10-15 09:11:18.870613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:01.033 [2024-10-15 09:11:18.870767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:01.033 [2024-10-15 09:11:18.870780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.033 [2024-10-15 09:11:18.871051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:01.033 [2024-10-15 09:11:18.871202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:01.033 [2024-10-15 09:11:18.871214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:01.033 [2024-10-15 09:11:18.871351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.033 pt4 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.033 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.293 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.293 "name": "raid_bdev1", 00:12:01.293 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:01.293 "strip_size_kb": 0, 00:12:01.293 "state": "online", 00:12:01.293 "raid_level": "raid1", 00:12:01.293 "superblock": true, 00:12:01.293 "num_base_bdevs": 4, 00:12:01.293 
"num_base_bdevs_discovered": 4, 00:12:01.293 "num_base_bdevs_operational": 4, 00:12:01.293 "base_bdevs_list": [ 00:12:01.293 { 00:12:01.293 "name": "pt1", 00:12:01.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:01.293 "is_configured": true, 00:12:01.293 "data_offset": 2048, 00:12:01.293 "data_size": 63488 00:12:01.293 }, 00:12:01.293 { 00:12:01.293 "name": "pt2", 00:12:01.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.293 "is_configured": true, 00:12:01.293 "data_offset": 2048, 00:12:01.293 "data_size": 63488 00:12:01.293 }, 00:12:01.293 { 00:12:01.293 "name": "pt3", 00:12:01.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:01.293 "is_configured": true, 00:12:01.293 "data_offset": 2048, 00:12:01.293 "data_size": 63488 00:12:01.293 }, 00:12:01.293 { 00:12:01.293 "name": "pt4", 00:12:01.293 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:01.293 "is_configured": true, 00:12:01.293 "data_offset": 2048, 00:12:01.293 "data_size": 63488 00:12:01.293 } 00:12:01.293 ] 00:12:01.293 }' 00:12:01.293 09:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.293 09:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.553 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:01.553 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:01.553 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:01.554 [2024-10-15 09:11:19.365592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:01.554 "name": "raid_bdev1", 00:12:01.554 "aliases": [ 00:12:01.554 "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6" 00:12:01.554 ], 00:12:01.554 "product_name": "Raid Volume", 00:12:01.554 "block_size": 512, 00:12:01.554 "num_blocks": 63488, 00:12:01.554 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:01.554 "assigned_rate_limits": { 00:12:01.554 "rw_ios_per_sec": 0, 00:12:01.554 "rw_mbytes_per_sec": 0, 00:12:01.554 "r_mbytes_per_sec": 0, 00:12:01.554 "w_mbytes_per_sec": 0 00:12:01.554 }, 00:12:01.554 "claimed": false, 00:12:01.554 "zoned": false, 00:12:01.554 "supported_io_types": { 00:12:01.554 "read": true, 00:12:01.554 "write": true, 00:12:01.554 "unmap": false, 00:12:01.554 "flush": false, 00:12:01.554 "reset": true, 00:12:01.554 "nvme_admin": false, 00:12:01.554 "nvme_io": false, 00:12:01.554 "nvme_io_md": false, 00:12:01.554 "write_zeroes": true, 00:12:01.554 "zcopy": false, 00:12:01.554 "get_zone_info": false, 00:12:01.554 "zone_management": false, 00:12:01.554 "zone_append": false, 00:12:01.554 "compare": false, 00:12:01.554 "compare_and_write": false, 00:12:01.554 "abort": false, 00:12:01.554 "seek_hole": false, 00:12:01.554 "seek_data": false, 00:12:01.554 "copy": false, 00:12:01.554 "nvme_iov_md": false 00:12:01.554 }, 00:12:01.554 "memory_domains": [ 00:12:01.554 { 00:12:01.554 "dma_device_id": "system", 00:12:01.554 
"dma_device_type": 1 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.554 "dma_device_type": 2 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "dma_device_id": "system", 00:12:01.554 "dma_device_type": 1 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.554 "dma_device_type": 2 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "dma_device_id": "system", 00:12:01.554 "dma_device_type": 1 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.554 "dma_device_type": 2 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "dma_device_id": "system", 00:12:01.554 "dma_device_type": 1 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.554 "dma_device_type": 2 00:12:01.554 } 00:12:01.554 ], 00:12:01.554 "driver_specific": { 00:12:01.554 "raid": { 00:12:01.554 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:01.554 "strip_size_kb": 0, 00:12:01.554 "state": "online", 00:12:01.554 "raid_level": "raid1", 00:12:01.554 "superblock": true, 00:12:01.554 "num_base_bdevs": 4, 00:12:01.554 "num_base_bdevs_discovered": 4, 00:12:01.554 "num_base_bdevs_operational": 4, 00:12:01.554 "base_bdevs_list": [ 00:12:01.554 { 00:12:01.554 "name": "pt1", 00:12:01.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:01.554 "is_configured": true, 00:12:01.554 "data_offset": 2048, 00:12:01.554 "data_size": 63488 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "name": "pt2", 00:12:01.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.554 "is_configured": true, 00:12:01.554 "data_offset": 2048, 00:12:01.554 "data_size": 63488 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "name": "pt3", 00:12:01.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:01.554 "is_configured": true, 00:12:01.554 "data_offset": 2048, 00:12:01.554 "data_size": 63488 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "name": "pt4", 00:12:01.554 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:01.554 "is_configured": true, 00:12:01.554 "data_offset": 2048, 00:12:01.554 "data_size": 63488 00:12:01.554 } 00:12:01.554 ] 00:12:01.554 } 00:12:01.554 } 00:12:01.554 }' 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:01.554 pt2 00:12:01.554 pt3 00:12:01.554 pt4' 00:12:01.554 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.838 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:01.839 [2024-10-15 09:11:19.701080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.839 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a29b00b8-98d2-476e-8dbc-bcb854a9c9b6 '!=' a29b00b8-98d2-476e-8dbc-bcb854a9c9b6 ']' 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.104 [2024-10-15 09:11:19.748675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:02.104 09:11:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.104 "name": "raid_bdev1", 00:12:02.104 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:02.104 "strip_size_kb": 0, 00:12:02.104 "state": "online", 
00:12:02.104 "raid_level": "raid1", 00:12:02.104 "superblock": true, 00:12:02.104 "num_base_bdevs": 4, 00:12:02.104 "num_base_bdevs_discovered": 3, 00:12:02.104 "num_base_bdevs_operational": 3, 00:12:02.104 "base_bdevs_list": [ 00:12:02.104 { 00:12:02.104 "name": null, 00:12:02.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.104 "is_configured": false, 00:12:02.104 "data_offset": 0, 00:12:02.104 "data_size": 63488 00:12:02.104 }, 00:12:02.104 { 00:12:02.104 "name": "pt2", 00:12:02.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.104 "is_configured": true, 00:12:02.104 "data_offset": 2048, 00:12:02.104 "data_size": 63488 00:12:02.104 }, 00:12:02.104 { 00:12:02.104 "name": "pt3", 00:12:02.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.104 "is_configured": true, 00:12:02.104 "data_offset": 2048, 00:12:02.104 "data_size": 63488 00:12:02.104 }, 00:12:02.104 { 00:12:02.104 "name": "pt4", 00:12:02.104 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:02.104 "is_configured": true, 00:12:02.104 "data_offset": 2048, 00:12:02.104 "data_size": 63488 00:12:02.104 } 00:12:02.104 ] 00:12:02.104 }' 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.104 09:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.365 [2024-10-15 09:11:20.195832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.365 [2024-10-15 09:11:20.195920] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.365 [2024-10-15 09:11:20.196032] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:02.365 [2024-10-15 09:11:20.196129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.365 [2024-10-15 09:11:20.196178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.365 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:02.626 
09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.626 [2024-10-15 09:11:20.287658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:02.626 [2024-10-15 09:11:20.287733] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.626 [2024-10-15 09:11:20.287755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:02.626 [2024-10-15 09:11:20.287764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.626 [2024-10-15 09:11:20.290182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.626 [2024-10-15 09:11:20.290224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:02.626 [2024-10-15 09:11:20.290310] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:02.626 [2024-10-15 09:11:20.290363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:02.626 pt2 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.626 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.626 "name": "raid_bdev1", 00:12:02.626 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:02.626 "strip_size_kb": 0, 00:12:02.626 "state": "configuring", 00:12:02.626 "raid_level": "raid1", 00:12:02.626 "superblock": true, 00:12:02.626 "num_base_bdevs": 4, 00:12:02.626 "num_base_bdevs_discovered": 1, 00:12:02.626 "num_base_bdevs_operational": 3, 00:12:02.626 "base_bdevs_list": [ 00:12:02.626 { 00:12:02.626 "name": null, 00:12:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.626 "is_configured": false, 00:12:02.626 "data_offset": 2048, 00:12:02.626 "data_size": 63488 00:12:02.626 }, 00:12:02.626 { 00:12:02.626 "name": "pt2", 00:12:02.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.626 "is_configured": true, 00:12:02.626 "data_offset": 2048, 00:12:02.626 "data_size": 63488 00:12:02.626 }, 00:12:02.626 { 00:12:02.626 "name": null, 00:12:02.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.626 "is_configured": false, 00:12:02.626 "data_offset": 2048, 00:12:02.626 "data_size": 63488 00:12:02.626 }, 00:12:02.626 { 00:12:02.626 "name": null, 00:12:02.626 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:02.626 "is_configured": false, 00:12:02.626 "data_offset": 2048, 00:12:02.626 "data_size": 63488 00:12:02.626 } 00:12:02.626 ] 00:12:02.627 }' 
00:12:02.627 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.627 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.886 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:02.886 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:02.886 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:02.886 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.886 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.886 [2024-10-15 09:11:20.754937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:02.887 [2024-10-15 09:11:20.755098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.887 [2024-10-15 09:11:20.755144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:02.887 [2024-10-15 09:11:20.755177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.887 [2024-10-15 09:11:20.755733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.887 [2024-10-15 09:11:20.755798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:02.887 [2024-10-15 09:11:20.755933] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:02.887 [2024-10-15 09:11:20.755991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:02.887 pt3 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.887 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.146 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.146 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.146 "name": "raid_bdev1", 00:12:03.146 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:03.146 "strip_size_kb": 0, 00:12:03.146 "state": "configuring", 00:12:03.146 "raid_level": "raid1", 00:12:03.146 "superblock": true, 00:12:03.146 "num_base_bdevs": 4, 00:12:03.146 "num_base_bdevs_discovered": 2, 00:12:03.146 "num_base_bdevs_operational": 3, 00:12:03.146 
"base_bdevs_list": [ 00:12:03.146 { 00:12:03.146 "name": null, 00:12:03.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.146 "is_configured": false, 00:12:03.146 "data_offset": 2048, 00:12:03.146 "data_size": 63488 00:12:03.146 }, 00:12:03.146 { 00:12:03.146 "name": "pt2", 00:12:03.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.146 "is_configured": true, 00:12:03.146 "data_offset": 2048, 00:12:03.146 "data_size": 63488 00:12:03.146 }, 00:12:03.146 { 00:12:03.146 "name": "pt3", 00:12:03.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.146 "is_configured": true, 00:12:03.146 "data_offset": 2048, 00:12:03.146 "data_size": 63488 00:12:03.146 }, 00:12:03.146 { 00:12:03.146 "name": null, 00:12:03.146 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.146 "is_configured": false, 00:12:03.146 "data_offset": 2048, 00:12:03.146 "data_size": 63488 00:12:03.146 } 00:12:03.146 ] 00:12:03.146 }' 00:12:03.146 09:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.146 09:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.716 [2024-10-15 09:11:21.318012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:03.716 [2024-10-15 09:11:21.318101] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.716 [2024-10-15 09:11:21.318128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:03.716 [2024-10-15 09:11:21.318138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.716 [2024-10-15 09:11:21.318624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.716 [2024-10-15 09:11:21.318642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:03.716 [2024-10-15 09:11:21.318752] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:03.716 [2024-10-15 09:11:21.318786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:03.716 [2024-10-15 09:11:21.318940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:03.716 [2024-10-15 09:11:21.318949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:03.716 [2024-10-15 09:11:21.319197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:03.716 [2024-10-15 09:11:21.319354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:03.716 [2024-10-15 09:11:21.319367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:03.716 [2024-10-15 09:11:21.319528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.716 pt4 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.716 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.716 "name": "raid_bdev1", 00:12:03.716 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:03.716 "strip_size_kb": 0, 00:12:03.716 "state": "online", 00:12:03.716 "raid_level": "raid1", 00:12:03.716 "superblock": true, 00:12:03.716 "num_base_bdevs": 4, 00:12:03.716 "num_base_bdevs_discovered": 3, 00:12:03.716 "num_base_bdevs_operational": 3, 00:12:03.716 "base_bdevs_list": [ 00:12:03.716 { 00:12:03.716 "name": null, 00:12:03.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.716 "is_configured": false, 00:12:03.716 
"data_offset": 2048, 00:12:03.716 "data_size": 63488 00:12:03.716 }, 00:12:03.716 { 00:12:03.716 "name": "pt2", 00:12:03.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.716 "is_configured": true, 00:12:03.716 "data_offset": 2048, 00:12:03.716 "data_size": 63488 00:12:03.716 }, 00:12:03.716 { 00:12:03.716 "name": "pt3", 00:12:03.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.716 "is_configured": true, 00:12:03.716 "data_offset": 2048, 00:12:03.716 "data_size": 63488 00:12:03.716 }, 00:12:03.717 { 00:12:03.717 "name": "pt4", 00:12:03.717 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.717 "is_configured": true, 00:12:03.717 "data_offset": 2048, 00:12:03.717 "data_size": 63488 00:12:03.717 } 00:12:03.717 ] 00:12:03.717 }' 00:12:03.717 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.717 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.976 [2024-10-15 09:11:21.769214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.976 [2024-10-15 09:11:21.769314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.976 [2024-10-15 09:11:21.769435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.976 [2024-10-15 09:11:21.769541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.976 [2024-10-15 09:11:21.769599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:03.976 09:11:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.976 [2024-10-15 09:11:21.849088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:03.976 [2024-10-15 09:11:21.849168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:03.976 [2024-10-15 09:11:21.849192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:03.976 [2024-10-15 09:11:21.849206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.976 [2024-10-15 09:11:21.851902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.976 [2024-10-15 09:11:21.851949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:03.976 [2024-10-15 09:11:21.852058] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:03.976 [2024-10-15 09:11:21.852113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:03.976 [2024-10-15 09:11:21.852265] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:03.976 [2024-10-15 09:11:21.852280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.976 [2024-10-15 09:11:21.852302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:03.976 [2024-10-15 09:11:21.852386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.976 [2024-10-15 09:11:21.852518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:03.976 pt1 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.976 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.235 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.235 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.235 "name": "raid_bdev1", 00:12:04.235 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:04.235 "strip_size_kb": 0, 00:12:04.235 "state": "configuring", 00:12:04.235 "raid_level": "raid1", 00:12:04.235 "superblock": true, 00:12:04.235 "num_base_bdevs": 4, 00:12:04.235 "num_base_bdevs_discovered": 2, 00:12:04.235 "num_base_bdevs_operational": 3, 00:12:04.235 "base_bdevs_list": [ 00:12:04.235 { 00:12:04.235 "name": null, 00:12:04.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.235 "is_configured": false, 00:12:04.235 "data_offset": 2048, 00:12:04.235 
"data_size": 63488 00:12:04.235 }, 00:12:04.235 { 00:12:04.235 "name": "pt2", 00:12:04.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.235 "is_configured": true, 00:12:04.235 "data_offset": 2048, 00:12:04.235 "data_size": 63488 00:12:04.235 }, 00:12:04.235 { 00:12:04.235 "name": "pt3", 00:12:04.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.235 "is_configured": true, 00:12:04.235 "data_offset": 2048, 00:12:04.235 "data_size": 63488 00:12:04.235 }, 00:12:04.235 { 00:12:04.235 "name": null, 00:12:04.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.235 "is_configured": false, 00:12:04.235 "data_offset": 2048, 00:12:04.235 "data_size": 63488 00:12:04.235 } 00:12:04.235 ] 00:12:04.235 }' 00:12:04.235 09:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.235 09:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.494 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:04.494 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:04.494 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.494 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.494 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.752 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.753 [2024-10-15 
09:11:22.420208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:04.753 [2024-10-15 09:11:22.420284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.753 [2024-10-15 09:11:22.420311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:04.753 [2024-10-15 09:11:22.420323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.753 [2024-10-15 09:11:22.420889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.753 [2024-10-15 09:11:22.420912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:04.753 [2024-10-15 09:11:22.421018] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:04.753 [2024-10-15 09:11:22.421046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:04.753 [2024-10-15 09:11:22.421194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:04.753 [2024-10-15 09:11:22.421204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.753 [2024-10-15 09:11:22.421494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:04.753 [2024-10-15 09:11:22.421668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:04.753 [2024-10-15 09:11:22.421682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:04.753 [2024-10-15 09:11:22.421860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.753 pt4 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.753 09:11:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.753 "name": "raid_bdev1", 00:12:04.753 "uuid": "a29b00b8-98d2-476e-8dbc-bcb854a9c9b6", 00:12:04.753 "strip_size_kb": 0, 00:12:04.753 "state": "online", 00:12:04.753 "raid_level": "raid1", 00:12:04.753 "superblock": true, 00:12:04.753 "num_base_bdevs": 4, 00:12:04.753 "num_base_bdevs_discovered": 3, 00:12:04.753 "num_base_bdevs_operational": 3, 00:12:04.753 "base_bdevs_list": [ 00:12:04.753 { 
00:12:04.753 "name": null, 00:12:04.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.753 "is_configured": false, 00:12:04.753 "data_offset": 2048, 00:12:04.753 "data_size": 63488 00:12:04.753 }, 00:12:04.753 { 00:12:04.753 "name": "pt2", 00:12:04.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.753 "is_configured": true, 00:12:04.753 "data_offset": 2048, 00:12:04.753 "data_size": 63488 00:12:04.753 }, 00:12:04.753 { 00:12:04.753 "name": "pt3", 00:12:04.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.753 "is_configured": true, 00:12:04.753 "data_offset": 2048, 00:12:04.753 "data_size": 63488 00:12:04.753 }, 00:12:04.753 { 00:12:04.753 "name": "pt4", 00:12:04.753 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.753 "is_configured": true, 00:12:04.753 "data_offset": 2048, 00:12:04.753 "data_size": 63488 00:12:04.753 } 00:12:04.753 ] 00:12:04.753 }' 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.753 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.012 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:05.012 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:05.012 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.012 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.012 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.277 
09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:05.277 [2024-10-15 09:11:22.927783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a29b00b8-98d2-476e-8dbc-bcb854a9c9b6 '!=' a29b00b8-98d2-476e-8dbc-bcb854a9c9b6 ']' 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74640 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74640 ']' 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74640 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.277 09:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74640 00:12:05.277 09:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:05.277 09:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:05.277 09:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74640' 00:12:05.277 killing process with pid 74640 00:12:05.277 09:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74640 00:12:05.277 [2024-10-15 09:11:23.015071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.277 [2024-10-15 09:11:23.015235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.277 09:11:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74640 00:12:05.277 [2024-10-15 09:11:23.015351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.277 [2024-10-15 09:11:23.015371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:05.844 [2024-10-15 09:11:23.496221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.222 09:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:07.222 ************************************ 00:12:07.222 END TEST raid_superblock_test 00:12:07.222 ************************************ 00:12:07.222 00:12:07.222 real 0m9.258s 00:12:07.222 user 0m14.346s 00:12:07.222 sys 0m1.740s 00:12:07.222 09:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.222 09:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.222 09:11:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:07.222 09:11:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:07.222 09:11:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.222 09:11:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.222 ************************************ 00:12:07.222 START TEST raid_read_error_test 00:12:07.222 ************************************ 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:07.222 
09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:07.222 09:11:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pitSmZbp91 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75139 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75139 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75139 ']' 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.222 09:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.222 [2024-10-15 09:11:25.016284] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:12:07.222 [2024-10-15 09:11:25.016520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75139 ] 00:12:07.481 [2024-10-15 09:11:25.184229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.481 [2024-10-15 09:11:25.319248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.741 [2024-10-15 09:11:25.558008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.741 [2024-10-15 09:11:25.558182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 BaseBdev1_malloc 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 true 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 [2024-10-15 09:11:25.995536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:08.311 [2024-10-15 09:11:25.995658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.311 [2024-10-15 09:11:25.995695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:08.311 [2024-10-15 09:11:25.995710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.311 [2024-10-15 09:11:25.998197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.311 [2024-10-15 09:11:25.998244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.311 BaseBdev1 00:12:08.311 09:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 BaseBdev2_malloc 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 true 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 [2024-10-15 09:11:26.068724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:08.311 [2024-10-15 09:11:26.068795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.311 [2024-10-15 09:11:26.068815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:08.311 [2024-10-15 09:11:26.068828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.311 [2024-10-15 09:11:26.071313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.311 [2024-10-15 09:11:26.071361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:08.311 BaseBdev2 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 BaseBdev3_malloc 00:12:08.311 09:11:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 true 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.311 [2024-10-15 09:11:26.156035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:08.311 [2024-10-15 09:11:26.156101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.311 [2024-10-15 09:11:26.156124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:08.311 [2024-10-15 09:11:26.156137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.311 [2024-10-15 09:11:26.158641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.311 [2024-10-15 09:11:26.158712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:08.311 BaseBdev3 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.311 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.571 BaseBdev4_malloc 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.571 true 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.571 [2024-10-15 09:11:26.230088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:08.571 [2024-10-15 09:11:26.230236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.571 [2024-10-15 09:11:26.230269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:08.571 [2024-10-15 09:11:26.230285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.571 [2024-10-15 09:11:26.233002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.571 [2024-10-15 09:11:26.233060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:08.571 BaseBdev4 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.571 [2024-10-15 09:11:26.242134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.571 [2024-10-15 09:11:26.244333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.571 [2024-10-15 09:11:26.244514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.571 [2024-10-15 09:11:26.244601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:08.571 [2024-10-15 09:11:26.244921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:08.571 [2024-10-15 09:11:26.244940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.571 [2024-10-15 09:11:26.245262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:08.571 [2024-10-15 09:11:26.245457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:08.571 [2024-10-15 09:11:26.245468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:08.571 [2024-10-15 09:11:26.245713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:08.571 09:11:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.571 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.572 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.572 "name": "raid_bdev1", 00:12:08.572 "uuid": "fd83c17e-121a-418d-94e8-fdb3ca77fc48", 00:12:08.572 "strip_size_kb": 0, 00:12:08.572 "state": "online", 00:12:08.572 "raid_level": "raid1", 00:12:08.572 "superblock": true, 00:12:08.572 "num_base_bdevs": 4, 00:12:08.572 "num_base_bdevs_discovered": 4, 00:12:08.572 "num_base_bdevs_operational": 4, 00:12:08.572 "base_bdevs_list": [ 00:12:08.572 { 
00:12:08.572 "name": "BaseBdev1", 00:12:08.572 "uuid": "42e8eba7-fad1-5525-bf4e-f0f0dc2c2480", 00:12:08.572 "is_configured": true, 00:12:08.572 "data_offset": 2048, 00:12:08.572 "data_size": 63488 00:12:08.572 }, 00:12:08.572 { 00:12:08.572 "name": "BaseBdev2", 00:12:08.572 "uuid": "82e08a0f-1b13-55dc-86f6-90fd311a3a1f", 00:12:08.572 "is_configured": true, 00:12:08.572 "data_offset": 2048, 00:12:08.572 "data_size": 63488 00:12:08.572 }, 00:12:08.572 { 00:12:08.572 "name": "BaseBdev3", 00:12:08.572 "uuid": "a39efe75-68c6-5d8e-a8fe-451f1efbc381", 00:12:08.572 "is_configured": true, 00:12:08.572 "data_offset": 2048, 00:12:08.572 "data_size": 63488 00:12:08.572 }, 00:12:08.572 { 00:12:08.572 "name": "BaseBdev4", 00:12:08.572 "uuid": "aff73872-61f4-5a1b-8180-5832df49f35e", 00:12:08.572 "is_configured": true, 00:12:08.572 "data_offset": 2048, 00:12:08.572 "data_size": 63488 00:12:08.572 } 00:12:08.572 ] 00:12:08.572 }' 00:12:08.572 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.572 09:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.830 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:08.830 09:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:09.089 [2024-10-15 09:11:26.830623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.091 09:11:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.091 09:11:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.091 "name": "raid_bdev1", 00:12:10.091 "uuid": "fd83c17e-121a-418d-94e8-fdb3ca77fc48", 00:12:10.091 "strip_size_kb": 0, 00:12:10.091 "state": "online", 00:12:10.091 "raid_level": "raid1", 00:12:10.091 "superblock": true, 00:12:10.091 "num_base_bdevs": 4, 00:12:10.091 "num_base_bdevs_discovered": 4, 00:12:10.091 "num_base_bdevs_operational": 4, 00:12:10.091 "base_bdevs_list": [ 00:12:10.091 { 00:12:10.091 "name": "BaseBdev1", 00:12:10.091 "uuid": "42e8eba7-fad1-5525-bf4e-f0f0dc2c2480", 00:12:10.091 "is_configured": true, 00:12:10.091 "data_offset": 2048, 00:12:10.091 "data_size": 63488 00:12:10.091 }, 00:12:10.091 { 00:12:10.091 "name": "BaseBdev2", 00:12:10.091 "uuid": "82e08a0f-1b13-55dc-86f6-90fd311a3a1f", 00:12:10.091 "is_configured": true, 00:12:10.091 "data_offset": 2048, 00:12:10.091 "data_size": 63488 00:12:10.091 }, 00:12:10.091 { 00:12:10.091 "name": "BaseBdev3", 00:12:10.091 "uuid": "a39efe75-68c6-5d8e-a8fe-451f1efbc381", 00:12:10.091 "is_configured": true, 00:12:10.091 "data_offset": 2048, 00:12:10.091 "data_size": 63488 00:12:10.091 }, 00:12:10.091 { 00:12:10.091 "name": "BaseBdev4", 00:12:10.091 "uuid": "aff73872-61f4-5a1b-8180-5832df49f35e", 00:12:10.091 "is_configured": true, 00:12:10.091 "data_offset": 2048, 00:12:10.091 "data_size": 63488 00:12:10.091 } 00:12:10.091 ] 00:12:10.091 }' 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.091 09:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.350 [2024-10-15 09:11:28.163460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.350 [2024-10-15 09:11:28.163600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.350 [2024-10-15 09:11:28.166629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.350 [2024-10-15 09:11:28.166739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.350 [2024-10-15 09:11:28.166891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.350 [2024-10-15 09:11:28.166941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:10.350 { 00:12:10.350 "results": [ 00:12:10.350 { 00:12:10.350 "job": "raid_bdev1", 00:12:10.350 "core_mask": "0x1", 00:12:10.350 "workload": "randrw", 00:12:10.350 "percentage": 50, 00:12:10.350 "status": "finished", 00:12:10.350 "queue_depth": 1, 00:12:10.350 "io_size": 131072, 00:12:10.350 "runtime": 1.333294, 00:12:10.350 "iops": 9790.038806144781, 00:12:10.350 "mibps": 1223.7548507680976, 00:12:10.350 "io_failed": 0, 00:12:10.350 "io_timeout": 0, 00:12:10.350 "avg_latency_us": 99.27192443839142, 00:12:10.350 "min_latency_us": 25.3764192139738, 00:12:10.350 "max_latency_us": 1566.8541484716156 00:12:10.350 } 00:12:10.350 ], 00:12:10.350 "core_count": 1 00:12:10.350 } 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75139 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75139 ']' 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75139 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75139 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75139' 00:12:10.350 killing process with pid 75139 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75139 00:12:10.350 [2024-10-15 09:11:28.199189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.350 09:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75139 00:12:10.919 [2024-10-15 09:11:28.558362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pitSmZbp91 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:12.301 00:12:12.301 real 0m4.934s 00:12:12.301 user 0m5.842s 00:12:12.301 sys 0m0.598s 
00:12:12.301 ************************************ 00:12:12.301 END TEST raid_read_error_test 00:12:12.301 ************************************ 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.301 09:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.301 09:11:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:12.301 09:11:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:12.301 09:11:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.301 09:11:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.301 ************************************ 00:12:12.301 START TEST raid_write_error_test 00:12:12.301 ************************************ 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.D7UdmKCyeQ 00:12:12.301 09:11:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75284 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75284 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75284 ']' 00:12:12.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.301 09:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.301 [2024-10-15 09:11:30.010475] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:12:12.301 [2024-10-15 09:11:30.010709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75284 ] 00:12:12.301 [2024-10-15 09:11:30.158824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.561 [2024-10-15 09:11:30.312194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.850 [2024-10-15 09:11:30.574214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.850 [2024-10-15 09:11:30.574445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.118 BaseBdev1_malloc 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.118 true 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.118 09:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.118 [2024-10-15 09:11:31.000421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:13.118 [2024-10-15 09:11:31.000622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.118 [2024-10-15 09:11:31.000675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:13.118 [2024-10-15 09:11:31.000716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.118 [2024-10-15 09:11:31.004284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.118 [2024-10-15 09:11:31.004353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.118 BaseBdev1 00:12:13.118 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.118 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.118 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.118 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.118 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 BaseBdev2_malloc 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:13.379 09:11:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 true 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 [2024-10-15 09:11:31.084779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:13.379 [2024-10-15 09:11:31.084878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.379 [2024-10-15 09:11:31.084907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.379 [2024-10-15 09:11:31.084923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.379 [2024-10-15 09:11:31.088025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.379 [2024-10-15 09:11:31.088103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.379 BaseBdev2 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:13.379 BaseBdev3_malloc 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 true 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 [2024-10-15 09:11:31.181746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:13.379 [2024-10-15 09:11:31.181871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.379 [2024-10-15 09:11:31.181920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:13.379 [2024-10-15 09:11:31.181965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.379 [2024-10-15 09:11:31.185060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.379 [2024-10-15 09:11:31.185180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:13.379 BaseBdev3 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 BaseBdev4_malloc 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 true 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.379 [2024-10-15 09:11:31.264072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:13.379 [2024-10-15 09:11:31.264203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.379 [2024-10-15 09:11:31.264251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:13.379 [2024-10-15 09:11:31.264317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.379 [2024-10-15 09:11:31.267216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.379 [2024-10-15 09:11:31.267311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:13.379 BaseBdev4 
00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.379 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.640 [2024-10-15 09:11:31.276253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.640 [2024-10-15 09:11:31.278800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.640 [2024-10-15 09:11:31.278900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.640 [2024-10-15 09:11:31.278977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.640 [2024-10-15 09:11:31.279263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:13.640 [2024-10-15 09:11:31.279280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.640 [2024-10-15 09:11:31.279631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:13.640 [2024-10-15 09:11:31.279881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:13.640 [2024-10-15 09:11:31.279894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:13.640 [2024-10-15 09:11:31.280170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.640 "name": "raid_bdev1", 00:12:13.640 "uuid": "9a325da0-0abd-41ef-83a3-a1ca9dbf96df", 00:12:13.640 "strip_size_kb": 0, 00:12:13.640 "state": "online", 00:12:13.640 "raid_level": "raid1", 00:12:13.640 "superblock": true, 00:12:13.640 "num_base_bdevs": 4, 00:12:13.640 "num_base_bdevs_discovered": 4, 00:12:13.640 
"num_base_bdevs_operational": 4, 00:12:13.640 "base_bdevs_list": [ 00:12:13.640 { 00:12:13.640 "name": "BaseBdev1", 00:12:13.640 "uuid": "a5cd8ff7-d499-5591-a98d-3f40da717d43", 00:12:13.640 "is_configured": true, 00:12:13.640 "data_offset": 2048, 00:12:13.640 "data_size": 63488 00:12:13.640 }, 00:12:13.640 { 00:12:13.640 "name": "BaseBdev2", 00:12:13.640 "uuid": "28dc5bff-106b-574e-b9c4-cd9adfe5496d", 00:12:13.640 "is_configured": true, 00:12:13.640 "data_offset": 2048, 00:12:13.640 "data_size": 63488 00:12:13.640 }, 00:12:13.640 { 00:12:13.640 "name": "BaseBdev3", 00:12:13.640 "uuid": "1a0d2241-4790-5802-b572-fc884133743f", 00:12:13.640 "is_configured": true, 00:12:13.640 "data_offset": 2048, 00:12:13.640 "data_size": 63488 00:12:13.640 }, 00:12:13.640 { 00:12:13.640 "name": "BaseBdev4", 00:12:13.640 "uuid": "84b1a955-c150-5ca2-bc3d-a10e2f18e38d", 00:12:13.640 "is_configured": true, 00:12:13.640 "data_offset": 2048, 00:12:13.640 "data_size": 63488 00:12:13.640 } 00:12:13.640 ] 00:12:13.640 }' 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.640 09:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.899 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:13.899 09:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:14.159 [2024-10-15 09:11:31.853198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.096 [2024-10-15 09:11:32.760830] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:15.096 [2024-10-15 09:11:32.761003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.096 [2024-10-15 09:11:32.761321] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.096 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.096 "name": "raid_bdev1", 00:12:15.096 "uuid": "9a325da0-0abd-41ef-83a3-a1ca9dbf96df", 00:12:15.096 "strip_size_kb": 0, 00:12:15.096 "state": "online", 00:12:15.096 "raid_level": "raid1", 00:12:15.096 "superblock": true, 00:12:15.096 "num_base_bdevs": 4, 00:12:15.096 "num_base_bdevs_discovered": 3, 00:12:15.096 "num_base_bdevs_operational": 3, 00:12:15.096 "base_bdevs_list": [ 00:12:15.096 { 00:12:15.096 "name": null, 00:12:15.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.096 "is_configured": false, 00:12:15.096 "data_offset": 0, 00:12:15.096 "data_size": 63488 00:12:15.096 }, 00:12:15.096 { 00:12:15.096 "name": "BaseBdev2", 00:12:15.096 "uuid": "28dc5bff-106b-574e-b9c4-cd9adfe5496d", 00:12:15.096 "is_configured": true, 00:12:15.096 "data_offset": 2048, 00:12:15.096 "data_size": 63488 00:12:15.096 }, 00:12:15.096 { 00:12:15.096 "name": "BaseBdev3", 00:12:15.096 "uuid": "1a0d2241-4790-5802-b572-fc884133743f", 00:12:15.096 "is_configured": true, 00:12:15.096 "data_offset": 2048, 00:12:15.097 "data_size": 63488 00:12:15.097 }, 00:12:15.097 { 00:12:15.097 "name": "BaseBdev4", 00:12:15.097 "uuid": "84b1a955-c150-5ca2-bc3d-a10e2f18e38d", 00:12:15.097 "is_configured": true, 00:12:15.097 "data_offset": 2048, 00:12:15.097 "data_size": 63488 00:12:15.097 } 00:12:15.097 ] 
00:12:15.097 }' 00:12:15.097 09:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.097 09:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.355 09:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.355 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.355 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.355 [2024-10-15 09:11:33.239448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.355 [2024-10-15 09:11:33.239492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.355 [2024-10-15 09:11:33.242881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.355 [2024-10-15 09:11:33.242969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.355 [2024-10-15 09:11:33.243197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.356 [2024-10-15 09:11:33.243266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:15.356 { 00:12:15.356 "results": [ 00:12:15.356 { 00:12:15.356 "job": "raid_bdev1", 00:12:15.356 "core_mask": "0x1", 00:12:15.356 "workload": "randrw", 00:12:15.356 "percentage": 50, 00:12:15.356 "status": "finished", 00:12:15.356 "queue_depth": 1, 00:12:15.356 "io_size": 131072, 00:12:15.356 "runtime": 1.386204, 00:12:15.356 "iops": 7389.244295933355, 00:12:15.356 "mibps": 923.6555369916694, 00:12:15.356 "io_failed": 0, 00:12:15.356 "io_timeout": 0, 00:12:15.356 "avg_latency_us": 132.24640877335762, 00:12:15.356 "min_latency_us": 25.3764192139738, 00:12:15.356 "max_latency_us": 1874.5013100436681 00:12:15.356 } 00:12:15.356 ], 00:12:15.356 "core_count": 1 
00:12:15.356 } 00:12:15.356 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.356 09:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75284 00:12:15.356 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75284 ']' 00:12:15.356 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75284 00:12:15.356 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:15.616 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.616 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75284 00:12:15.616 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.616 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:15.616 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75284' 00:12:15.616 killing process with pid 75284 00:12:15.616 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75284 00:12:15.616 [2024-10-15 09:11:33.289644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.616 09:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75284 00:12:15.876 [2024-10-15 09:11:33.686207] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.D7UdmKCyeQ 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:17.254 00:12:17.254 real 0m5.232s 00:12:17.254 user 0m6.007s 00:12:17.254 sys 0m0.779s 00:12:17.254 ************************************ 00:12:17.254 END TEST raid_write_error_test 00:12:17.254 ************************************ 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.254 09:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.513 09:11:35 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:17.513 09:11:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:17.513 09:11:35 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:17.513 09:11:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:17.513 09:11:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.513 09:11:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.513 ************************************ 00:12:17.513 START TEST raid_rebuild_test 00:12:17.513 ************************************ 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:17.513 
09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75435 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75435 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75435 ']' 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:17.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.513 09:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.513 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.513 Zero copy mechanism will not be used. 00:12:17.513 [2024-10-15 09:11:35.311474] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:12:17.513 [2024-10-15 09:11:35.311598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75435 ] 00:12:17.772 [2024-10-15 09:11:35.459194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.772 [2024-10-15 09:11:35.577723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.031 [2024-10-15 09:11:35.790256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.031 [2024-10-15 09:11:35.790334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.291 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.291 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:18.291 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.291 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.291 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.291 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.550 BaseBdev1_malloc 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.550 [2024-10-15 09:11:36.214350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:18.550 
[2024-10-15 09:11:36.214452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.550 [2024-10-15 09:11:36.214477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.550 [2024-10-15 09:11:36.214489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.550 [2024-10-15 09:11:36.216679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.550 [2024-10-15 09:11:36.216735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.550 BaseBdev1 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.550 BaseBdev2_malloc 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.550 [2024-10-15 09:11:36.269105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:18.550 [2024-10-15 09:11:36.269199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.550 [2024-10-15 09:11:36.269220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:18.550 [2024-10-15 09:11:36.269233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.550 [2024-10-15 09:11:36.271345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.550 [2024-10-15 09:11:36.271393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.550 BaseBdev2 00:12:18.550 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.551 spare_malloc 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.551 spare_delay 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.551 [2024-10-15 09:11:36.350491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.551 [2024-10-15 09:11:36.350590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:18.551 [2024-10-15 09:11:36.350621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:18.551 [2024-10-15 09:11:36.350635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.551 [2024-10-15 09:11:36.353113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.551 [2024-10-15 09:11:36.353182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.551 spare 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.551 [2024-10-15 09:11:36.362490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.551 [2024-10-15 09:11:36.364400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.551 [2024-10-15 09:11:36.364622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:18.551 [2024-10-15 09:11:36.364641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:18.551 [2024-10-15 09:11:36.365004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:18.551 [2024-10-15 09:11:36.365190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:18.551 [2024-10-15 09:11:36.365201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:18.551 [2024-10-15 09:11:36.365392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.551 "name": "raid_bdev1", 00:12:18.551 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:18.551 "strip_size_kb": 0, 00:12:18.551 "state": "online", 00:12:18.551 
"raid_level": "raid1", 00:12:18.551 "superblock": false, 00:12:18.551 "num_base_bdevs": 2, 00:12:18.551 "num_base_bdevs_discovered": 2, 00:12:18.551 "num_base_bdevs_operational": 2, 00:12:18.551 "base_bdevs_list": [ 00:12:18.551 { 00:12:18.551 "name": "BaseBdev1", 00:12:18.551 "uuid": "03356ae8-b8da-5942-b27f-a3a3743a08ae", 00:12:18.551 "is_configured": true, 00:12:18.551 "data_offset": 0, 00:12:18.551 "data_size": 65536 00:12:18.551 }, 00:12:18.551 { 00:12:18.551 "name": "BaseBdev2", 00:12:18.551 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:18.551 "is_configured": true, 00:12:18.551 "data_offset": 0, 00:12:18.551 "data_size": 65536 00:12:18.551 } 00:12:18.551 ] 00:12:18.551 }' 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.551 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.119 [2024-10-15 09:11:36.834108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.119 09:11:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.119 09:11:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:19.378 [2024-10-15 09:11:37.153262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:19.378 /dev/nbd0 00:12:19.378 09:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.378 09:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:19.378 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:19.378 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.379 1+0 records in 00:12:19.379 1+0 records out 00:12:19.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535219 s, 7.7 MB/s 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:19.379 09:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:24.685 65536+0 records in 00:12:24.685 65536+0 records out 00:12:24.685 33554432 bytes (34 MB, 32 MiB) copied, 5.31993 s, 6.3 MB/s 00:12:24.685 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:24.685 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.685 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:24.685 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.685 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:24.685 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.685 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:24.946 [2024-10-15 09:11:42.806642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.946 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.946 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.946 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.946 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.946 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.946 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.205 [2024-10-15 09:11:42.850730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.205 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.206 09:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.206 09:11:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.206 09:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.206 09:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.206 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.206 "name": "raid_bdev1", 00:12:25.206 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:25.206 "strip_size_kb": 0, 00:12:25.206 "state": "online", 00:12:25.206 "raid_level": "raid1", 00:12:25.206 "superblock": false, 00:12:25.206 "num_base_bdevs": 2, 00:12:25.206 "num_base_bdevs_discovered": 1, 00:12:25.206 "num_base_bdevs_operational": 1, 00:12:25.206 "base_bdevs_list": [ 00:12:25.206 { 00:12:25.206 "name": null, 00:12:25.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.206 "is_configured": false, 00:12:25.206 "data_offset": 0, 00:12:25.206 "data_size": 65536 00:12:25.206 }, 00:12:25.206 { 00:12:25.206 "name": "BaseBdev2", 00:12:25.206 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:25.206 "is_configured": true, 00:12:25.206 "data_offset": 0, 00:12:25.206 "data_size": 65536 00:12:25.206 } 00:12:25.206 ] 00:12:25.206 }' 00:12:25.206 09:11:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.206 09:11:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.464 09:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:25.464 09:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.464 09:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.464 [2024-10-15 09:11:43.329935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.464 [2024-10-15 09:11:43.347519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:25.464 09:11:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.464 [2024-10-15 09:11:43.349491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.464 09:11:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:26.842 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.843 "name": "raid_bdev1", 00:12:26.843 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:26.843 "strip_size_kb": 0, 00:12:26.843 "state": "online", 00:12:26.843 "raid_level": "raid1", 00:12:26.843 "superblock": false, 00:12:26.843 "num_base_bdevs": 2, 00:12:26.843 "num_base_bdevs_discovered": 2, 00:12:26.843 "num_base_bdevs_operational": 2, 00:12:26.843 "process": { 00:12:26.843 "type": "rebuild", 00:12:26.843 "target": "spare", 00:12:26.843 "progress": { 00:12:26.843 
"blocks": 20480, 00:12:26.843 "percent": 31 00:12:26.843 } 00:12:26.843 }, 00:12:26.843 "base_bdevs_list": [ 00:12:26.843 { 00:12:26.843 "name": "spare", 00:12:26.843 "uuid": "fc1eb598-9eb6-5b4f-936a-d165ab876873", 00:12:26.843 "is_configured": true, 00:12:26.843 "data_offset": 0, 00:12:26.843 "data_size": 65536 00:12:26.843 }, 00:12:26.843 { 00:12:26.843 "name": "BaseBdev2", 00:12:26.843 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:26.843 "is_configured": true, 00:12:26.843 "data_offset": 0, 00:12:26.843 "data_size": 65536 00:12:26.843 } 00:12:26.843 ] 00:12:26.843 }' 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.843 [2024-10-15 09:11:44.493001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.843 [2024-10-15 09:11:44.556384] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:26.843 [2024-10-15 09:11:44.556532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.843 [2024-10-15 09:11:44.556555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.843 [2024-10-15 09:11:44.556574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:26.843 09:11:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.843 "name": "raid_bdev1", 00:12:26.843 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:26.843 "strip_size_kb": 0, 00:12:26.843 "state": "online", 00:12:26.843 "raid_level": "raid1", 00:12:26.843 
"superblock": false, 00:12:26.843 "num_base_bdevs": 2, 00:12:26.843 "num_base_bdevs_discovered": 1, 00:12:26.843 "num_base_bdevs_operational": 1, 00:12:26.843 "base_bdevs_list": [ 00:12:26.843 { 00:12:26.843 "name": null, 00:12:26.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.843 "is_configured": false, 00:12:26.843 "data_offset": 0, 00:12:26.843 "data_size": 65536 00:12:26.843 }, 00:12:26.843 { 00:12:26.843 "name": "BaseBdev2", 00:12:26.843 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:26.843 "is_configured": true, 00:12:26.843 "data_offset": 0, 00:12:26.843 "data_size": 65536 00:12:26.843 } 00:12:26.843 ] 00:12:26.843 }' 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.843 09:11:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:27.444 "name": "raid_bdev1", 00:12:27.444 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:27.444 "strip_size_kb": 0, 00:12:27.444 "state": "online", 00:12:27.444 "raid_level": "raid1", 00:12:27.444 "superblock": false, 00:12:27.444 "num_base_bdevs": 2, 00:12:27.444 "num_base_bdevs_discovered": 1, 00:12:27.444 "num_base_bdevs_operational": 1, 00:12:27.444 "base_bdevs_list": [ 00:12:27.444 { 00:12:27.444 "name": null, 00:12:27.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.444 "is_configured": false, 00:12:27.444 "data_offset": 0, 00:12:27.444 "data_size": 65536 00:12:27.444 }, 00:12:27.444 { 00:12:27.444 "name": "BaseBdev2", 00:12:27.444 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:27.444 "is_configured": true, 00:12:27.444 "data_offset": 0, 00:12:27.444 "data_size": 65536 00:12:27.444 } 00:12:27.444 ] 00:12:27.444 }' 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.444 [2024-10-15 09:11:45.188918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.444 [2024-10-15 09:11:45.208337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:27.444 09:11:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.444 
09:11:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:27.444 [2024-10-15 09:11:45.210830] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.379 "name": "raid_bdev1", 00:12:28.379 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:28.379 "strip_size_kb": 0, 00:12:28.379 "state": "online", 00:12:28.379 "raid_level": "raid1", 00:12:28.379 "superblock": false, 00:12:28.379 "num_base_bdevs": 2, 00:12:28.379 "num_base_bdevs_discovered": 2, 00:12:28.379 "num_base_bdevs_operational": 2, 00:12:28.379 "process": { 00:12:28.379 "type": "rebuild", 00:12:28.379 "target": "spare", 00:12:28.379 "progress": { 00:12:28.379 "blocks": 20480, 00:12:28.379 "percent": 31 00:12:28.379 } 00:12:28.379 }, 00:12:28.379 "base_bdevs_list": [ 
00:12:28.379 { 00:12:28.379 "name": "spare", 00:12:28.379 "uuid": "fc1eb598-9eb6-5b4f-936a-d165ab876873", 00:12:28.379 "is_configured": true, 00:12:28.379 "data_offset": 0, 00:12:28.379 "data_size": 65536 00:12:28.379 }, 00:12:28.379 { 00:12:28.379 "name": "BaseBdev2", 00:12:28.379 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:28.379 "is_configured": true, 00:12:28.379 "data_offset": 0, 00:12:28.379 "data_size": 65536 00:12:28.379 } 00:12:28.379 ] 00:12:28.379 }' 00:12:28.379 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.636 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.636 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.636 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=390 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.637 
09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.637 "name": "raid_bdev1", 00:12:28.637 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:28.637 "strip_size_kb": 0, 00:12:28.637 "state": "online", 00:12:28.637 "raid_level": "raid1", 00:12:28.637 "superblock": false, 00:12:28.637 "num_base_bdevs": 2, 00:12:28.637 "num_base_bdevs_discovered": 2, 00:12:28.637 "num_base_bdevs_operational": 2, 00:12:28.637 "process": { 00:12:28.637 "type": "rebuild", 00:12:28.637 "target": "spare", 00:12:28.637 "progress": { 00:12:28.637 "blocks": 22528, 00:12:28.637 "percent": 34 00:12:28.637 } 00:12:28.637 }, 00:12:28.637 "base_bdevs_list": [ 00:12:28.637 { 00:12:28.637 "name": "spare", 00:12:28.637 "uuid": "fc1eb598-9eb6-5b4f-936a-d165ab876873", 00:12:28.637 "is_configured": true, 00:12:28.637 "data_offset": 0, 00:12:28.637 "data_size": 65536 00:12:28.637 }, 00:12:28.637 { 00:12:28.637 "name": "BaseBdev2", 00:12:28.637 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:28.637 "is_configured": true, 00:12:28.637 "data_offset": 0, 00:12:28.637 "data_size": 65536 00:12:28.637 } 00:12:28.637 ] 00:12:28.637 }' 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.637 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.010 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.010 "name": "raid_bdev1", 00:12:30.010 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:30.010 "strip_size_kb": 0, 00:12:30.010 "state": "online", 00:12:30.010 "raid_level": "raid1", 00:12:30.010 "superblock": false, 00:12:30.011 "num_base_bdevs": 2, 00:12:30.011 "num_base_bdevs_discovered": 2, 00:12:30.011 "num_base_bdevs_operational": 2, 00:12:30.011 "process": { 
00:12:30.011 "type": "rebuild", 00:12:30.011 "target": "spare", 00:12:30.011 "progress": { 00:12:30.011 "blocks": 45056, 00:12:30.011 "percent": 68 00:12:30.011 } 00:12:30.011 }, 00:12:30.011 "base_bdevs_list": [ 00:12:30.011 { 00:12:30.011 "name": "spare", 00:12:30.011 "uuid": "fc1eb598-9eb6-5b4f-936a-d165ab876873", 00:12:30.011 "is_configured": true, 00:12:30.011 "data_offset": 0, 00:12:30.011 "data_size": 65536 00:12:30.011 }, 00:12:30.011 { 00:12:30.011 "name": "BaseBdev2", 00:12:30.011 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:30.011 "is_configured": true, 00:12:30.011 "data_offset": 0, 00:12:30.011 "data_size": 65536 00:12:30.011 } 00:12:30.011 ] 00:12:30.011 }' 00:12:30.011 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.011 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:30.011 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.011 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.011 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.577 [2024-10-15 09:11:48.428464] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:30.577 [2024-10-15 09:11:48.428561] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:30.577 [2024-10-15 09:11:48.428609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.836 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.836 "name": "raid_bdev1", 00:12:30.836 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:30.836 "strip_size_kb": 0, 00:12:30.836 "state": "online", 00:12:30.836 "raid_level": "raid1", 00:12:30.836 "superblock": false, 00:12:30.837 "num_base_bdevs": 2, 00:12:30.837 "num_base_bdevs_discovered": 2, 00:12:30.837 "num_base_bdevs_operational": 2, 00:12:30.837 "base_bdevs_list": [ 00:12:30.837 { 00:12:30.837 "name": "spare", 00:12:30.837 "uuid": "fc1eb598-9eb6-5b4f-936a-d165ab876873", 00:12:30.837 "is_configured": true, 00:12:30.837 "data_offset": 0, 00:12:30.837 "data_size": 65536 00:12:30.837 }, 00:12:30.837 { 00:12:30.837 "name": "BaseBdev2", 00:12:30.837 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:30.837 "is_configured": true, 00:12:30.837 "data_offset": 0, 00:12:30.837 "data_size": 65536 00:12:30.837 } 00:12:30.837 ] 00:12:30.837 }' 00:12:30.837 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.837 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:30.837 09:11:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.098 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.098 "name": "raid_bdev1", 00:12:31.098 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:31.098 "strip_size_kb": 0, 00:12:31.098 "state": "online", 00:12:31.098 "raid_level": "raid1", 00:12:31.098 "superblock": false, 00:12:31.098 "num_base_bdevs": 2, 00:12:31.098 "num_base_bdevs_discovered": 2, 00:12:31.098 "num_base_bdevs_operational": 2, 00:12:31.098 "base_bdevs_list": [ 00:12:31.098 { 00:12:31.098 "name": "spare", 00:12:31.098 "uuid": "fc1eb598-9eb6-5b4f-936a-d165ab876873", 00:12:31.098 "is_configured": true, 
00:12:31.098 "data_offset": 0, 00:12:31.099 "data_size": 65536 00:12:31.099 }, 00:12:31.099 { 00:12:31.099 "name": "BaseBdev2", 00:12:31.099 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:31.099 "is_configured": true, 00:12:31.099 "data_offset": 0, 00:12:31.099 "data_size": 65536 00:12:31.099 } 00:12:31.099 ] 00:12:31.099 }' 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.099 "name": "raid_bdev1", 00:12:31.099 "uuid": "e967405d-f87f-4eb6-b29d-ab8362b95e22", 00:12:31.099 "strip_size_kb": 0, 00:12:31.099 "state": "online", 00:12:31.099 "raid_level": "raid1", 00:12:31.099 "superblock": false, 00:12:31.099 "num_base_bdevs": 2, 00:12:31.099 "num_base_bdevs_discovered": 2, 00:12:31.099 "num_base_bdevs_operational": 2, 00:12:31.099 "base_bdevs_list": [ 00:12:31.099 { 00:12:31.099 "name": "spare", 00:12:31.099 "uuid": "fc1eb598-9eb6-5b4f-936a-d165ab876873", 00:12:31.099 "is_configured": true, 00:12:31.099 "data_offset": 0, 00:12:31.099 "data_size": 65536 00:12:31.099 }, 00:12:31.099 { 00:12:31.099 "name": "BaseBdev2", 00:12:31.099 "uuid": "8e90d229-82d7-5d6f-8339-8a1761967d13", 00:12:31.099 "is_configured": true, 00:12:31.099 "data_offset": 0, 00:12:31.099 "data_size": 65536 00:12:31.099 } 00:12:31.099 ] 00:12:31.099 }' 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.099 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.669 [2024-10-15 09:11:49.371953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.669 [2024-10-15 09:11:49.371993] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.669 [2024-10-15 09:11:49.372082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.669 [2024-10-15 09:11:49.372155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.669 [2024-10-15 09:11:49.372166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.669 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:31.929 /dev/nbd0 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.929 1+0 records in 00:12:31.929 1+0 records out 00:12:31.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324925 s, 12.6 MB/s 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.929 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:32.187 /dev/nbd1 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:32.187 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.187 1+0 records in 00:12:32.187 1+0 records out 00:12:32.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396897 s, 10.3 MB/s 00:12:32.188 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.188 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:32.188 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.188 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:32.188 09:11:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:32.188 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.188 09:11:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:32.188 09:11:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:32.446 09:11:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:32.446 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.446 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.446 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.446 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:32.446 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.446 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.706 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75435 00:12:32.966 09:11:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75435 ']' 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75435 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75435 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.966 killing process with pid 75435 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75435' 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75435 00:12:32.966 Received shutdown signal, test time was about 60.000000 seconds 00:12:32.966 00:12:32.966 Latency(us) 00:12:32.966 [2024-10-15T09:11:50.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.966 [2024-10-15T09:11:50.862Z] =================================================================================================================== 00:12:32.966 [2024-10-15T09:11:50.862Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:32.966 [2024-10-15 09:11:50.679026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.966 09:11:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75435 00:12:33.225 [2024-10-15 09:11:51.008565] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:34.603 00:12:34.603 real 0m16.957s 00:12:34.603 user 0m18.586s 00:12:34.603 sys 0m3.560s 00:12:34.603 09:11:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.603 ************************************ 00:12:34.603 END TEST raid_rebuild_test 00:12:34.603 ************************************ 00:12:34.603 09:11:52 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:34.603 09:11:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:34.603 09:11:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.603 09:11:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.603 ************************************ 00:12:34.603 START TEST raid_rebuild_test_sb 00:12:34.603 ************************************ 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75869 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75869 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75869 ']' 00:12:34.603 09:11:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.603 09:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.603 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:34.603 Zero copy mechanism will not be used. 00:12:34.603 [2024-10-15 09:11:52.347773] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:12:34.603 [2024-10-15 09:11:52.347903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75869 ] 00:12:34.860 [2024-10-15 09:11:52.500200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.860 [2024-10-15 09:11:52.648883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.117 [2024-10-15 09:11:52.865754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.117 [2024-10-15 09:11:52.865814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.375 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.375 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:35.375 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:12:35.375 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.375 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.375 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.634 BaseBdev1_malloc 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.634 [2024-10-15 09:11:53.304395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:35.634 [2024-10-15 09:11:53.304481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.634 [2024-10-15 09:11:53.304527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:35.634 [2024-10-15 09:11:53.304548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.634 [2024-10-15 09:11:53.307221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.634 [2024-10-15 09:11:53.307270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.634 BaseBdev1 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.634 09:11:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.634 BaseBdev2_malloc 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.634 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.635 [2024-10-15 09:11:53.360272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:35.635 [2024-10-15 09:11:53.360371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.635 [2024-10-15 09:11:53.360403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:35.635 [2024-10-15 09:11:53.360428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.635 [2024-10-15 09:11:53.363540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.635 [2024-10-15 09:11:53.363596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.635 BaseBdev2 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.635 spare_malloc 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.635 spare_delay 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.635 [2024-10-15 09:11:53.438699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:35.635 [2024-10-15 09:11:53.438783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.635 [2024-10-15 09:11:53.438810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:35.635 [2024-10-15 09:11:53.438823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.635 [2024-10-15 09:11:53.441411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.635 [2024-10-15 09:11:53.441466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:35.635 spare 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.635 09:11:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.635 [2024-10-15 09:11:53.446745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.635 [2024-10-15 09:11:53.448919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.635 [2024-10-15 09:11:53.449148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:35.635 [2024-10-15 09:11:53.449178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.635 [2024-10-15 09:11:53.449511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:35.635 [2024-10-15 09:11:53.449749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:35.635 [2024-10-15 09:11:53.449771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:35.635 [2024-10-15 09:11:53.449976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.635 "name": "raid_bdev1", 00:12:35.635 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:35.635 "strip_size_kb": 0, 00:12:35.635 "state": "online", 00:12:35.635 "raid_level": "raid1", 00:12:35.635 "superblock": true, 00:12:35.635 "num_base_bdevs": 2, 00:12:35.635 "num_base_bdevs_discovered": 2, 00:12:35.635 "num_base_bdevs_operational": 2, 00:12:35.635 "base_bdevs_list": [ 00:12:35.635 { 00:12:35.635 "name": "BaseBdev1", 00:12:35.635 "uuid": "456efeb0-999f-5032-82c8-569dd105e6a7", 00:12:35.635 "is_configured": true, 00:12:35.635 "data_offset": 2048, 00:12:35.635 "data_size": 63488 00:12:35.635 }, 00:12:35.635 { 00:12:35.635 "name": "BaseBdev2", 00:12:35.635 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:35.635 "is_configured": true, 00:12:35.635 "data_offset": 2048, 00:12:35.635 "data_size": 63488 00:12:35.635 } 00:12:35.635 ] 00:12:35.635 }' 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.635 09:11:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.205 [2024-10-15 09:11:53.946268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.205 09:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.205 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:36.464 [2024-10-15 09:11:54.221481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:36.464 /dev/nbd0 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.464 1+0 records in 00:12:36.464 1+0 records out 00:12:36.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538479 s, 7.6 MB/s 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:36.464 09:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:41.741 63488+0 records in 00:12:41.741 63488+0 records out 00:12:41.741 32505856 bytes (33 MB, 31 MiB) copied, 4.50259 s, 7.2 MB/s 00:12:41.741 09:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:41.741 09:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.741 09:11:58 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:41.741 09:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.741 09:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:41.741 09:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.741 09:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:41.741 [2024-10-15 09:11:59.024702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.741 [2024-10-15 09:11:59.060775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.741 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.742 "name": "raid_bdev1", 00:12:41.742 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:41.742 "strip_size_kb": 0, 00:12:41.742 "state": "online", 00:12:41.742 "raid_level": "raid1", 00:12:41.742 "superblock": true, 
00:12:41.742 "num_base_bdevs": 2, 00:12:41.742 "num_base_bdevs_discovered": 1, 00:12:41.742 "num_base_bdevs_operational": 1, 00:12:41.742 "base_bdevs_list": [ 00:12:41.742 { 00:12:41.742 "name": null, 00:12:41.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.742 "is_configured": false, 00:12:41.742 "data_offset": 0, 00:12:41.742 "data_size": 63488 00:12:41.742 }, 00:12:41.742 { 00:12:41.742 "name": "BaseBdev2", 00:12:41.742 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:41.742 "is_configured": true, 00:12:41.742 "data_offset": 2048, 00:12:41.742 "data_size": 63488 00:12:41.742 } 00:12:41.742 ] 00:12:41.742 }' 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.742 [2024-10-15 09:11:59.520027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.742 [2024-10-15 09:11:59.537962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.742 09:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:41.742 [2024-10-15 09:11:59.540318] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
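[Editor's note: the JSON dump above is the input that `verify_raid_bdev_state` works from. A minimal sketch of that check, with the `bdev_raid_get_bdevs` RPC response replaced by a trimmed literal from the log — `raid_bdev_info` here is a hand-built sample, and `jq` is assumed available, as in the test itself:]

```shell
# Sketch of the verify_raid_bdev_state pattern seen in bdev_raid.sh:
# select one bdev's record out of the bdev_raid_get_bdevs array, then
# compare the fields the test asserts on (state, raid_level, and the
# number of configured base bdevs). The JSON is a trimmed sample of
# the output captured in the log, not a live RPC response.
raid_bdev_info='[{ "name": "raid_bdev1", "state": "online",
  "raid_level": "raid1",
  "base_bdevs_list": [ { "name": null,        "is_configured": false },
                       { "name": "BaseBdev2", "is_configured": true  } ] }]'

# Pull out the record for raid_bdev1, as the test does with jq -r.
tmp=$(echo "$raid_bdev_info" | jq -r '.[] | select(.name == "raid_bdev1")')

# Extract the fields under test.
state=$(echo "$tmp" | jq -r '.state')
discovered=$(echo "$tmp" | jq '[.base_bdevs_list[] | select(.is_configured)] | length')

echo "$state $discovered"
```

After one base bdev has been removed, this prints `online 1`: the array still lists two slots, but only the configured one counts as discovered, matching the `"num_base_bdevs_discovered": 1` seen above.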
00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.680 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.939 "name": "raid_bdev1", 00:12:42.939 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:42.939 "strip_size_kb": 0, 00:12:42.939 "state": "online", 00:12:42.939 "raid_level": "raid1", 00:12:42.939 "superblock": true, 00:12:42.939 "num_base_bdevs": 2, 00:12:42.939 "num_base_bdevs_discovered": 2, 00:12:42.939 "num_base_bdevs_operational": 2, 00:12:42.939 "process": { 00:12:42.939 "type": "rebuild", 00:12:42.939 "target": "spare", 00:12:42.939 "progress": { 00:12:42.939 "blocks": 20480, 00:12:42.939 "percent": 32 00:12:42.939 } 00:12:42.939 }, 00:12:42.939 "base_bdevs_list": [ 00:12:42.939 { 00:12:42.939 "name": "spare", 00:12:42.939 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:42.939 "is_configured": true, 00:12:42.939 "data_offset": 2048, 00:12:42.939 "data_size": 63488 00:12:42.939 }, 00:12:42.939 { 00:12:42.939 "name": "BaseBdev2", 00:12:42.939 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:42.939 "is_configured": true, 00:12:42.939 "data_offset": 2048, 00:12:42.939 "data_size": 63488 
00:12:42.939 } 00:12:42.939 ] 00:12:42.939 }' 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.939 [2024-10-15 09:12:00.675741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:42.939 [2024-10-15 09:12:00.746868] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:42.939 [2024-10-15 09:12:00.746955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.939 [2024-10-15 09:12:00.746971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:42.939 [2024-10-15 09:12:00.746984] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.939 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.940 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.198 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.198 "name": "raid_bdev1", 00:12:43.198 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:43.198 "strip_size_kb": 0, 00:12:43.198 "state": "online", 00:12:43.198 "raid_level": "raid1", 00:12:43.198 "superblock": true, 00:12:43.198 "num_base_bdevs": 2, 00:12:43.198 "num_base_bdevs_discovered": 1, 00:12:43.198 "num_base_bdevs_operational": 1, 00:12:43.198 "base_bdevs_list": [ 00:12:43.198 { 00:12:43.198 "name": null, 00:12:43.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.198 "is_configured": false, 00:12:43.198 "data_offset": 0, 00:12:43.198 "data_size": 63488 00:12:43.198 }, 00:12:43.198 { 00:12:43.198 "name": "BaseBdev2", 00:12:43.198 "uuid": 
"107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:43.198 "is_configured": true, 00:12:43.198 "data_offset": 2048, 00:12:43.198 "data_size": 63488 00:12:43.198 } 00:12:43.198 ] 00:12:43.198 }' 00:12:43.198 09:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.198 09:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.457 "name": "raid_bdev1", 00:12:43.457 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:43.457 "strip_size_kb": 0, 00:12:43.457 "state": "online", 00:12:43.457 "raid_level": "raid1", 00:12:43.457 "superblock": true, 00:12:43.457 "num_base_bdevs": 2, 00:12:43.457 "num_base_bdevs_discovered": 1, 00:12:43.457 "num_base_bdevs_operational": 1, 00:12:43.457 "base_bdevs_list": [ 00:12:43.457 { 
00:12:43.457 "name": null, 00:12:43.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.457 "is_configured": false, 00:12:43.457 "data_offset": 0, 00:12:43.457 "data_size": 63488 00:12:43.457 }, 00:12:43.457 { 00:12:43.457 "name": "BaseBdev2", 00:12:43.457 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:43.457 "is_configured": true, 00:12:43.457 "data_offset": 2048, 00:12:43.457 "data_size": 63488 00:12:43.457 } 00:12:43.457 ] 00:12:43.457 }' 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.457 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.716 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.716 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.716 09:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.716 09:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.716 [2024-10-15 09:12:01.371748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.716 [2024-10-15 09:12:01.389250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:43.716 09:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.716 09:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:43.716 [2024-10-15 09:12:01.391421] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.651 09:12:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.651 "name": "raid_bdev1", 00:12:44.651 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:44.651 "strip_size_kb": 0, 00:12:44.651 "state": "online", 00:12:44.651 "raid_level": "raid1", 00:12:44.651 "superblock": true, 00:12:44.651 "num_base_bdevs": 2, 00:12:44.651 "num_base_bdevs_discovered": 2, 00:12:44.651 "num_base_bdevs_operational": 2, 00:12:44.651 "process": { 00:12:44.651 "type": "rebuild", 00:12:44.651 "target": "spare", 00:12:44.651 "progress": { 00:12:44.651 "blocks": 20480, 00:12:44.651 "percent": 32 00:12:44.651 } 00:12:44.651 }, 00:12:44.651 "base_bdevs_list": [ 00:12:44.651 { 00:12:44.651 "name": "spare", 00:12:44.651 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:44.651 "is_configured": true, 00:12:44.651 "data_offset": 2048, 00:12:44.651 "data_size": 63488 00:12:44.651 }, 00:12:44.651 { 00:12:44.651 "name": "BaseBdev2", 00:12:44.651 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:44.651 
"is_configured": true, 00:12:44.651 "data_offset": 2048, 00:12:44.651 "data_size": 63488 00:12:44.651 } 00:12:44.651 ] 00:12:44.651 }' 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:44.651 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.651 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.651 "name": "raid_bdev1", 00:12:44.651 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:44.651 "strip_size_kb": 0, 00:12:44.651 "state": "online", 00:12:44.651 "raid_level": "raid1", 00:12:44.651 "superblock": true, 00:12:44.651 "num_base_bdevs": 2, 00:12:44.652 "num_base_bdevs_discovered": 2, 00:12:44.652 "num_base_bdevs_operational": 2, 00:12:44.652 "process": { 00:12:44.652 "type": "rebuild", 00:12:44.652 "target": "spare", 00:12:44.652 "progress": { 00:12:44.652 "blocks": 22528, 00:12:44.652 "percent": 35 00:12:44.652 } 00:12:44.652 }, 00:12:44.652 "base_bdevs_list": [ 00:12:44.652 { 00:12:44.652 "name": "spare", 00:12:44.652 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:44.652 "is_configured": true, 00:12:44.652 "data_offset": 2048, 00:12:44.652 "data_size": 63488 00:12:44.652 }, 00:12:44.652 { 00:12:44.652 "name": "BaseBdev2", 00:12:44.652 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:44.652 "is_configured": true, 00:12:44.652 "data_offset": 2048, 00:12:44.652 "data_size": 63488 00:12:44.652 } 00:12:44.652 ] 00:12:44.652 }' 00:12:44.652 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.910 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.910 09:12:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.910 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.910 09:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.848 "name": "raid_bdev1", 00:12:45.848 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:45.848 "strip_size_kb": 0, 00:12:45.848 "state": "online", 00:12:45.848 "raid_level": "raid1", 00:12:45.848 "superblock": true, 00:12:45.848 "num_base_bdevs": 2, 00:12:45.848 "num_base_bdevs_discovered": 2, 00:12:45.848 "num_base_bdevs_operational": 2, 00:12:45.848 "process": { 
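[Editor's note: the `sleep 1` / `SECONDS < timeout` cycle above is the rebuild-progress polling loop from `bdev_raid.sh` (@706-711). A reduced sketch of its control flow, where `get_process_type` is a stand-in that replays the `.process.type // "none"` values observed in the log rather than calling the RPC:]

```shell
# Sketch of the polling loop: keep re-reading the process type until
# the rebuild no longer appears in the bdev info ("none") or the
# wall-clock budget runs out. In the real test the value comes from
#   jq -r '.process.type // "none"'
# over the bdev_raid_get_bdevs output; here it is replayed from an array.
states=(rebuild rebuild none)
n=0
get_process_type() { echo "${states[$n]}"; }

timeout=406                      # same budget the log shows
while (( SECONDS < timeout )); do
    ptype=$(get_process_type)
    [[ "$ptype" == "none" ]] && break
    (( n++ ))                    # advance the replayed state; real code sleeps 1s
done
echo "final: $ptype"
```

The loop exits via `break` on the third poll, so this prints `final: none` — the same transition the log records when `raid_bdev_process_finish_done` fires and the `process` object disappears from the JSON.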
00:12:45.848 "type": "rebuild", 00:12:45.848 "target": "spare", 00:12:45.848 "progress": { 00:12:45.848 "blocks": 45056, 00:12:45.848 "percent": 70 00:12:45.848 } 00:12:45.848 }, 00:12:45.848 "base_bdevs_list": [ 00:12:45.848 { 00:12:45.848 "name": "spare", 00:12:45.848 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:45.848 "is_configured": true, 00:12:45.848 "data_offset": 2048, 00:12:45.848 "data_size": 63488 00:12:45.848 }, 00:12:45.848 { 00:12:45.848 "name": "BaseBdev2", 00:12:45.848 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:45.848 "is_configured": true, 00:12:45.848 "data_offset": 2048, 00:12:45.848 "data_size": 63488 00:12:45.848 } 00:12:45.848 ] 00:12:45.848 }' 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.848 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.108 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.108 09:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.677 [2024-10-15 09:12:04.506823] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:46.677 [2024-10-15 09:12:04.506903] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:46.677 [2024-10-15 09:12:04.507026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.938 
09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.938 "name": "raid_bdev1", 00:12:46.938 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:46.938 "strip_size_kb": 0, 00:12:46.938 "state": "online", 00:12:46.938 "raid_level": "raid1", 00:12:46.938 "superblock": true, 00:12:46.938 "num_base_bdevs": 2, 00:12:46.938 "num_base_bdevs_discovered": 2, 00:12:46.938 "num_base_bdevs_operational": 2, 00:12:46.938 "base_bdevs_list": [ 00:12:46.938 { 00:12:46.938 "name": "spare", 00:12:46.938 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:46.938 "is_configured": true, 00:12:46.938 "data_offset": 2048, 00:12:46.938 "data_size": 63488 00:12:46.938 }, 00:12:46.938 { 00:12:46.938 "name": "BaseBdev2", 00:12:46.938 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:46.938 "is_configured": true, 00:12:46.938 "data_offset": 2048, 00:12:46.938 "data_size": 63488 00:12:46.938 } 00:12:46.938 ] 00:12:46.938 }' 00:12:46.938 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.198 "name": "raid_bdev1", 00:12:47.198 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:47.198 "strip_size_kb": 0, 00:12:47.198 "state": "online", 00:12:47.198 "raid_level": "raid1", 00:12:47.198 "superblock": true, 00:12:47.198 "num_base_bdevs": 2, 00:12:47.198 "num_base_bdevs_discovered": 2, 00:12:47.198 "num_base_bdevs_operational": 2, 00:12:47.198 "base_bdevs_list": [ 00:12:47.198 { 00:12:47.198 
"name": "spare", 00:12:47.198 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:47.198 "is_configured": true, 00:12:47.198 "data_offset": 2048, 00:12:47.198 "data_size": 63488 00:12:47.198 }, 00:12:47.198 { 00:12:47.198 "name": "BaseBdev2", 00:12:47.198 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:47.198 "is_configured": true, 00:12:47.198 "data_offset": 2048, 00:12:47.198 "data_size": 63488 00:12:47.198 } 00:12:47.198 ] 00:12:47.198 }' 00:12:47.198 09:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.198 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.457 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.457 "name": "raid_bdev1", 00:12:47.457 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:47.457 "strip_size_kb": 0, 00:12:47.457 "state": "online", 00:12:47.457 "raid_level": "raid1", 00:12:47.457 "superblock": true, 00:12:47.457 "num_base_bdevs": 2, 00:12:47.457 "num_base_bdevs_discovered": 2, 00:12:47.457 "num_base_bdevs_operational": 2, 00:12:47.457 "base_bdevs_list": [ 00:12:47.457 { 00:12:47.457 "name": "spare", 00:12:47.457 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:47.457 "is_configured": true, 00:12:47.457 "data_offset": 2048, 00:12:47.457 "data_size": 63488 00:12:47.457 }, 00:12:47.457 { 00:12:47.457 "name": "BaseBdev2", 00:12:47.457 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:47.457 "is_configured": true, 00:12:47.457 "data_offset": 2048, 00:12:47.457 "data_size": 63488 00:12:47.457 } 00:12:47.457 ] 00:12:47.457 }' 00:12:47.457 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.457 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.734 [2024-10-15 09:12:05.584076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.734 [2024-10-15 09:12:05.584117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.734 [2024-10-15 09:12:05.584214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.734 [2024-10-15 09:12:05.584300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.734 [2024-10-15 09:12:05.584316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:47.734 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:47.993 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:48.252 /dev/nbd0 00:12:48.252 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:48.252 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:48.252 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:48.252 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:48.252 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.253 1+0 records in 00:12:48.253 1+0 records out 00:12:48.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529156 s, 7.7 MB/s 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.253 09:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:48.511 /dev/nbd1 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:48.511 09:12:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.511 1+0 records in 00:12:48.511 1+0 records out 00:12:48.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503032 s, 8.1 MB/s 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.511 
09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.511 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.770 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.335 [2024-10-15 09:12:06.970244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:49.335 [2024-10-15 09:12:06.970325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.335 [2024-10-15 09:12:06.970354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:49.335 [2024-10-15 09:12:06.970365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.335 [2024-10-15 09:12:06.973149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.335 [2024-10-15 09:12:06.973197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:49.335 [2024-10-15 09:12:06.973330] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:49.335 [2024-10-15 
09:12:06.973406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.335 [2024-10-15 09:12:06.973599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.335 spare 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.335 09:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.335 [2024-10-15 09:12:07.073564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:49.335 [2024-10-15 09:12:07.073654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:49.335 [2024-10-15 09:12:07.074161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:49.335 [2024-10-15 09:12:07.074446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:49.335 [2024-10-15 09:12:07.074475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:49.335 [2024-10-15 09:12:07.074773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.335 "name": "raid_bdev1", 00:12:49.335 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:49.335 "strip_size_kb": 0, 00:12:49.335 "state": "online", 00:12:49.335 "raid_level": "raid1", 00:12:49.335 "superblock": true, 00:12:49.335 "num_base_bdevs": 2, 00:12:49.335 "num_base_bdevs_discovered": 2, 00:12:49.335 "num_base_bdevs_operational": 2, 00:12:49.335 "base_bdevs_list": [ 00:12:49.335 { 00:12:49.335 "name": "spare", 00:12:49.335 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:49.335 "is_configured": true, 00:12:49.335 "data_offset": 2048, 00:12:49.335 "data_size": 63488 00:12:49.335 }, 00:12:49.335 { 00:12:49.335 "name": "BaseBdev2", 00:12:49.335 "uuid": 
"107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:49.335 "is_configured": true, 00:12:49.335 "data_offset": 2048, 00:12:49.335 "data_size": 63488 00:12:49.335 } 00:12:49.335 ] 00:12:49.335 }' 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.335 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.593 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.850 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.850 "name": "raid_bdev1", 00:12:49.850 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:49.850 "strip_size_kb": 0, 00:12:49.850 "state": "online", 00:12:49.850 "raid_level": "raid1", 00:12:49.850 "superblock": true, 00:12:49.850 "num_base_bdevs": 2, 00:12:49.850 "num_base_bdevs_discovered": 2, 00:12:49.850 "num_base_bdevs_operational": 2, 00:12:49.850 "base_bdevs_list": [ 00:12:49.850 { 
00:12:49.850 "name": "spare", 00:12:49.850 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:49.850 "is_configured": true, 00:12:49.850 "data_offset": 2048, 00:12:49.850 "data_size": 63488 00:12:49.850 }, 00:12:49.850 { 00:12:49.850 "name": "BaseBdev2", 00:12:49.850 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:49.850 "is_configured": true, 00:12:49.850 "data_offset": 2048, 00:12:49.850 "data_size": 63488 00:12:49.851 } 00:12:49.851 ] 00:12:49.851 }' 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.851 [2024-10-15 09:12:07.629916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.851 "name": "raid_bdev1", 00:12:49.851 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:49.851 "strip_size_kb": 0, 00:12:49.851 
"state": "online", 00:12:49.851 "raid_level": "raid1", 00:12:49.851 "superblock": true, 00:12:49.851 "num_base_bdevs": 2, 00:12:49.851 "num_base_bdevs_discovered": 1, 00:12:49.851 "num_base_bdevs_operational": 1, 00:12:49.851 "base_bdevs_list": [ 00:12:49.851 { 00:12:49.851 "name": null, 00:12:49.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.851 "is_configured": false, 00:12:49.851 "data_offset": 0, 00:12:49.851 "data_size": 63488 00:12:49.851 }, 00:12:49.851 { 00:12:49.851 "name": "BaseBdev2", 00:12:49.851 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:49.851 "is_configured": true, 00:12:49.851 "data_offset": 2048, 00:12:49.851 "data_size": 63488 00:12:49.851 } 00:12:49.851 ] 00:12:49.851 }' 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.851 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.415 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:50.415 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.415 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.415 [2024-10-15 09:12:08.049420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.415 [2024-10-15 09:12:08.049764] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:50.415 [2024-10-15 09:12:08.049798] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:50.415 [2024-10-15 09:12:08.049854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.416 [2024-10-15 09:12:08.067494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:50.416 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.416 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:50.416 [2024-10-15 09:12:08.070011] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.348 "name": "raid_bdev1", 00:12:51.348 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:51.348 "strip_size_kb": 0, 00:12:51.348 "state": "online", 00:12:51.348 "raid_level": "raid1", 
00:12:51.348 "superblock": true, 00:12:51.348 "num_base_bdevs": 2, 00:12:51.348 "num_base_bdevs_discovered": 2, 00:12:51.348 "num_base_bdevs_operational": 2, 00:12:51.348 "process": { 00:12:51.348 "type": "rebuild", 00:12:51.348 "target": "spare", 00:12:51.348 "progress": { 00:12:51.348 "blocks": 20480, 00:12:51.348 "percent": 32 00:12:51.348 } 00:12:51.348 }, 00:12:51.348 "base_bdevs_list": [ 00:12:51.348 { 00:12:51.348 "name": "spare", 00:12:51.348 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:51.348 "is_configured": true, 00:12:51.348 "data_offset": 2048, 00:12:51.348 "data_size": 63488 00:12:51.348 }, 00:12:51.348 { 00:12:51.348 "name": "BaseBdev2", 00:12:51.348 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:51.348 "is_configured": true, 00:12:51.348 "data_offset": 2048, 00:12:51.348 "data_size": 63488 00:12:51.348 } 00:12:51.348 ] 00:12:51.348 }' 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.348 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.348 [2024-10-15 09:12:09.193985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.606 [2024-10-15 09:12:09.277350] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:51.606 [2024-10-15 09:12:09.277486] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:51.606 [2024-10-15 09:12:09.277509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.606 [2024-10-15 09:12:09.277524] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.606 "name": "raid_bdev1", 00:12:51.606 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:51.606 "strip_size_kb": 0, 00:12:51.606 "state": "online", 00:12:51.606 "raid_level": "raid1", 00:12:51.606 "superblock": true, 00:12:51.606 "num_base_bdevs": 2, 00:12:51.606 "num_base_bdevs_discovered": 1, 00:12:51.606 "num_base_bdevs_operational": 1, 00:12:51.606 "base_bdevs_list": [ 00:12:51.606 { 00:12:51.606 "name": null, 00:12:51.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.606 "is_configured": false, 00:12:51.606 "data_offset": 0, 00:12:51.606 "data_size": 63488 00:12:51.606 }, 00:12:51.606 { 00:12:51.606 "name": "BaseBdev2", 00:12:51.606 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:51.606 "is_configured": true, 00:12:51.606 "data_offset": 2048, 00:12:51.606 "data_size": 63488 00:12:51.606 } 00:12:51.606 ] 00:12:51.606 }' 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.606 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.866 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:51.866 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.866 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.866 [2024-10-15 09:12:09.726191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:51.866 [2024-10-15 09:12:09.726286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.866 [2024-10-15 09:12:09.726312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:51.866 [2024-10-15 09:12:09.726325] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.866 [2024-10-15 09:12:09.726899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.866 [2024-10-15 09:12:09.726939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:51.866 [2024-10-15 09:12:09.727052] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:51.866 [2024-10-15 09:12:09.727078] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:51.866 [2024-10-15 09:12:09.727089] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:51.866 [2024-10-15 09:12:09.727121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.866 [2024-10-15 09:12:09.744267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:51.866 spare 00:12:51.866 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.866 [2024-10-15 09:12:09.746184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.866 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:53.245 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.245 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.245 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.245 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.246 "name": "raid_bdev1", 00:12:53.246 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:53.246 "strip_size_kb": 0, 00:12:53.246 "state": "online", 00:12:53.246 "raid_level": "raid1", 00:12:53.246 "superblock": true, 00:12:53.246 "num_base_bdevs": 2, 00:12:53.246 "num_base_bdevs_discovered": 2, 00:12:53.246 "num_base_bdevs_operational": 2, 00:12:53.246 "process": { 00:12:53.246 "type": "rebuild", 00:12:53.246 "target": "spare", 00:12:53.246 "progress": { 00:12:53.246 "blocks": 20480, 00:12:53.246 "percent": 32 00:12:53.246 } 00:12:53.246 }, 00:12:53.246 "base_bdevs_list": [ 00:12:53.246 { 00:12:53.246 "name": "spare", 00:12:53.246 "uuid": "b56ae667-ea32-5882-8d17-a5944754881d", 00:12:53.246 "is_configured": true, 00:12:53.246 "data_offset": 2048, 00:12:53.246 "data_size": 63488 00:12:53.246 }, 00:12:53.246 { 00:12:53.246 "name": "BaseBdev2", 00:12:53.246 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:53.246 "is_configured": true, 00:12:53.246 "data_offset": 2048, 00:12:53.246 "data_size": 63488 00:12:53.246 } 00:12:53.246 ] 00:12:53.246 }' 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.246 
09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.246 [2024-10-15 09:12:10.921970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:53.246 [2024-10-15 09:12:10.952336] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:53.246 [2024-10-15 09:12:10.952420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.246 [2024-10-15 09:12:10.952438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:53.246 [2024-10-15 09:12:10.952445] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.246 09:12:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.246 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.246 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.246 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.246 "name": "raid_bdev1", 00:12:53.246 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:53.246 "strip_size_kb": 0, 00:12:53.246 "state": "online", 00:12:53.246 "raid_level": "raid1", 00:12:53.246 "superblock": true, 00:12:53.246 "num_base_bdevs": 2, 00:12:53.246 "num_base_bdevs_discovered": 1, 00:12:53.246 "num_base_bdevs_operational": 1, 00:12:53.246 "base_bdevs_list": [ 00:12:53.246 { 00:12:53.246 "name": null, 00:12:53.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.246 "is_configured": false, 00:12:53.246 "data_offset": 0, 00:12:53.246 "data_size": 63488 00:12:53.246 }, 00:12:53.246 { 00:12:53.246 "name": "BaseBdev2", 00:12:53.246 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:53.246 "is_configured": true, 00:12:53.246 "data_offset": 2048, 00:12:53.246 "data_size": 63488 00:12:53.246 } 00:12:53.246 ] 00:12:53.246 }' 00:12:53.246 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.246 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.815 09:12:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.815 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.815 "name": "raid_bdev1", 00:12:53.815 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:53.816 "strip_size_kb": 0, 00:12:53.816 "state": "online", 00:12:53.816 "raid_level": "raid1", 00:12:53.816 "superblock": true, 00:12:53.816 "num_base_bdevs": 2, 00:12:53.816 "num_base_bdevs_discovered": 1, 00:12:53.816 "num_base_bdevs_operational": 1, 00:12:53.816 "base_bdevs_list": [ 00:12:53.816 { 00:12:53.816 "name": null, 00:12:53.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.816 "is_configured": false, 00:12:53.816 "data_offset": 0, 00:12:53.816 "data_size": 63488 00:12:53.816 }, 00:12:53.816 { 00:12:53.816 "name": "BaseBdev2", 00:12:53.816 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:53.816 "is_configured": true, 00:12:53.816 "data_offset": 2048, 00:12:53.816 "data_size": 
63488 00:12:53.816 } 00:12:53.816 ] 00:12:53.816 }' 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 [2024-10-15 09:12:11.605743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:53.816 [2024-10-15 09:12:11.605831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.816 [2024-10-15 09:12:11.605860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:53.816 [2024-10-15 09:12:11.605881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.816 [2024-10-15 09:12:11.606467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.816 [2024-10-15 09:12:11.606500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:53.816 [2024-10-15 09:12:11.606613] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:53.816 [2024-10-15 09:12:11.606637] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:53.816 [2024-10-15 09:12:11.606649] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:53.816 [2024-10-15 09:12:11.606662] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:53.816 BaseBdev1 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.816 09:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.755 09:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.012 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.012 "name": "raid_bdev1", 00:12:55.012 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:55.012 "strip_size_kb": 0, 00:12:55.012 "state": "online", 00:12:55.012 "raid_level": "raid1", 00:12:55.012 "superblock": true, 00:12:55.012 "num_base_bdevs": 2, 00:12:55.012 "num_base_bdevs_discovered": 1, 00:12:55.012 "num_base_bdevs_operational": 1, 00:12:55.012 "base_bdevs_list": [ 00:12:55.012 { 00:12:55.012 "name": null, 00:12:55.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.012 "is_configured": false, 00:12:55.012 "data_offset": 0, 00:12:55.012 "data_size": 63488 00:12:55.012 }, 00:12:55.012 { 00:12:55.012 "name": "BaseBdev2", 00:12:55.012 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:55.012 "is_configured": true, 00:12:55.012 "data_offset": 2048, 00:12:55.012 "data_size": 63488 00:12:55.012 } 00:12:55.012 ] 00:12:55.012 }' 00:12:55.012 09:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.012 09:12:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.271 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:55.271 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.271 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.272 "name": "raid_bdev1", 00:12:55.272 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:55.272 "strip_size_kb": 0, 00:12:55.272 "state": "online", 00:12:55.272 "raid_level": "raid1", 00:12:55.272 "superblock": true, 00:12:55.272 "num_base_bdevs": 2, 00:12:55.272 "num_base_bdevs_discovered": 1, 00:12:55.272 "num_base_bdevs_operational": 1, 00:12:55.272 "base_bdevs_list": [ 00:12:55.272 { 00:12:55.272 "name": null, 00:12:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.272 "is_configured": false, 00:12:55.272 "data_offset": 0, 00:12:55.272 "data_size": 63488 00:12:55.272 }, 00:12:55.272 { 00:12:55.272 "name": "BaseBdev2", 00:12:55.272 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:55.272 "is_configured": true, 00:12:55.272 "data_offset": 2048, 00:12:55.272 "data_size": 63488 00:12:55.272 } 00:12:55.272 ] 00:12:55.272 }' 00:12:55.272 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.531 09:12:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.531 [2024-10-15 09:12:13.247034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.531 [2024-10-15 09:12:13.247210] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:55.531 [2024-10-15 09:12:13.247227] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:55.531 request: 00:12:55.531 { 00:12:55.531 "base_bdev": "BaseBdev1", 00:12:55.531 "raid_bdev": "raid_bdev1", 00:12:55.531 "method": 
"bdev_raid_add_base_bdev", 00:12:55.531 "req_id": 1 00:12:55.531 } 00:12:55.531 Got JSON-RPC error response 00:12:55.531 response: 00:12:55.531 { 00:12:55.531 "code": -22, 00:12:55.531 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:55.531 } 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.531 09:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.469 09:12:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.469 "name": "raid_bdev1", 00:12:56.469 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:56.469 "strip_size_kb": 0, 00:12:56.469 "state": "online", 00:12:56.469 "raid_level": "raid1", 00:12:56.469 "superblock": true, 00:12:56.469 "num_base_bdevs": 2, 00:12:56.469 "num_base_bdevs_discovered": 1, 00:12:56.469 "num_base_bdevs_operational": 1, 00:12:56.469 "base_bdevs_list": [ 00:12:56.469 { 00:12:56.469 "name": null, 00:12:56.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.469 "is_configured": false, 00:12:56.469 "data_offset": 0, 00:12:56.469 "data_size": 63488 00:12:56.469 }, 00:12:56.469 { 00:12:56.469 "name": "BaseBdev2", 00:12:56.469 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:56.469 "is_configured": true, 00:12:56.469 "data_offset": 2048, 00:12:56.469 "data_size": 63488 00:12:56.469 } 00:12:56.469 ] 00:12:56.469 }' 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.469 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.039 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.039 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.039 09:12:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.039 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.039 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.040 "name": "raid_bdev1", 00:12:57.040 "uuid": "0704c0d3-01a0-45e6-970c-f7b4240205a2", 00:12:57.040 "strip_size_kb": 0, 00:12:57.040 "state": "online", 00:12:57.040 "raid_level": "raid1", 00:12:57.040 "superblock": true, 00:12:57.040 "num_base_bdevs": 2, 00:12:57.040 "num_base_bdevs_discovered": 1, 00:12:57.040 "num_base_bdevs_operational": 1, 00:12:57.040 "base_bdevs_list": [ 00:12:57.040 { 00:12:57.040 "name": null, 00:12:57.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.040 "is_configured": false, 00:12:57.040 "data_offset": 0, 00:12:57.040 "data_size": 63488 00:12:57.040 }, 00:12:57.040 { 00:12:57.040 "name": "BaseBdev2", 00:12:57.040 "uuid": "107c2ff7-7bfe-50aa-a622-eda59de08f42", 00:12:57.040 "is_configured": true, 00:12:57.040 "data_offset": 2048, 00:12:57.040 "data_size": 63488 00:12:57.040 } 00:12:57.040 ] 00:12:57.040 }' 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75869 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75869 ']' 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75869 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75869 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:57.040 killing process with pid 75869 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75869' 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75869 00:12:57.040 Received shutdown signal, test time was about 60.000000 seconds 00:12:57.040 00:12:57.040 Latency(us) 00:12:57.040 [2024-10-15T09:12:14.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.040 [2024-10-15T09:12:14.936Z] =================================================================================================================== 00:12:57.040 [2024-10-15T09:12:14.936Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:57.040 09:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75869 00:12:57.040 [2024-10-15 
09:12:14.889852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.040 [2024-10-15 09:12:14.890017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.040 [2024-10-15 09:12:14.890085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.040 [2024-10-15 09:12:14.890098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:57.610 [2024-10-15 09:12:15.218257] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:58.548 00:12:58.548 real 0m24.122s 00:12:58.548 user 0m29.236s 00:12:58.548 sys 0m3.951s 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.548 ************************************ 00:12:58.548 END TEST raid_rebuild_test_sb 00:12:58.548 ************************************ 00:12:58.548 09:12:16 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:58.548 09:12:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:58.548 09:12:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.548 09:12:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.548 ************************************ 00:12:58.548 START TEST raid_rebuild_test_io 00:12:58.548 ************************************ 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:58.548 
09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76606 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76606 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76606 ']' 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.548 09:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:58.806 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:58.806 Zero copy mechanism will not be used. 00:12:58.806 [2024-10-15 09:12:16.537941] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:12:58.806 [2024-10-15 09:12:16.538104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76606 ] 00:12:59.066 [2024-10-15 09:12:16.709575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.066 [2024-10-15 09:12:16.832242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.325 [2024-10-15 09:12:17.041908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.325 [2024-10-15 09:12:17.041961] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.583 BaseBdev1_malloc 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.583 [2024-10-15 09:12:17.451927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:59.583 [2024-10-15 09:12:17.452016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.583 [2024-10-15 09:12:17.452045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.583 [2024-10-15 09:12:17.452059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.583 [2024-10-15 09:12:17.454553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.583 [2024-10-15 09:12:17.454597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.583 BaseBdev1 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.583 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.842 BaseBdev2_malloc 00:12:59.842 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.842 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:59.842 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.842 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.842 [2024-10-15 09:12:17.510965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:59.842 [2024-10-15 09:12:17.511061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.842 [2024-10-15 09:12:17.511088] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:59.842 [2024-10-15 09:12:17.511103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.842 [2024-10-15 09:12:17.513651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.842 [2024-10-15 09:12:17.513716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.842 BaseBdev2 00:12:59.842 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.843 spare_malloc 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.843 spare_delay 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.843 [2024-10-15 09:12:17.599859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:59.843 [2024-10-15 09:12:17.599936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.843 [2024-10-15 09:12:17.599961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:59.843 [2024-10-15 09:12:17.599974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.843 [2024-10-15 09:12:17.602413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.843 [2024-10-15 09:12:17.602457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.843 spare 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.843 [2024-10-15 09:12:17.607891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.843 [2024-10-15 09:12:17.609979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.843 [2024-10-15 09:12:17.610084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.843 [2024-10-15 09:12:17.610101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:59.843 [2024-10-15 09:12:17.610412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:59.843 [2024-10-15 09:12:17.610593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.843 [2024-10-15 09:12:17.610616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:59.843 [2024-10-15 09:12:17.610815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.843 
"name": "raid_bdev1", 00:12:59.843 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:12:59.843 "strip_size_kb": 0, 00:12:59.843 "state": "online", 00:12:59.843 "raid_level": "raid1", 00:12:59.843 "superblock": false, 00:12:59.843 "num_base_bdevs": 2, 00:12:59.843 "num_base_bdevs_discovered": 2, 00:12:59.843 "num_base_bdevs_operational": 2, 00:12:59.843 "base_bdevs_list": [ 00:12:59.843 { 00:12:59.843 "name": "BaseBdev1", 00:12:59.843 "uuid": "cc9c3c18-ad18-5389-b9da-e39d8b3a8810", 00:12:59.843 "is_configured": true, 00:12:59.843 "data_offset": 0, 00:12:59.843 "data_size": 65536 00:12:59.843 }, 00:12:59.843 { 00:12:59.843 "name": "BaseBdev2", 00:12:59.843 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:12:59.843 "is_configured": true, 00:12:59.843 "data_offset": 0, 00:12:59.843 "data_size": 65536 00:12:59.843 } 00:12:59.843 ] 00:12:59.843 }' 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.843 09:12:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.411 [2024-10-15 09:12:18.115421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.411 [2024-10-15 09:12:18.246872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:00.411 09:12:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.411 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.671 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.671 "name": "raid_bdev1", 00:13:00.671 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:00.671 "strip_size_kb": 0, 00:13:00.671 "state": "online", 00:13:00.671 "raid_level": "raid1", 00:13:00.671 "superblock": false, 00:13:00.671 "num_base_bdevs": 2, 00:13:00.671 "num_base_bdevs_discovered": 1, 00:13:00.671 "num_base_bdevs_operational": 1, 00:13:00.671 "base_bdevs_list": [ 00:13:00.671 { 00:13:00.671 "name": null, 00:13:00.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.671 "is_configured": false, 00:13:00.671 "data_offset": 0, 00:13:00.671 "data_size": 65536 00:13:00.671 }, 00:13:00.671 { 00:13:00.671 "name": "BaseBdev2", 00:13:00.671 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:00.671 "is_configured": true, 00:13:00.671 "data_offset": 0, 00:13:00.671 "data_size": 65536 00:13:00.671 } 00:13:00.671 ] 00:13:00.671 }' 00:13:00.671 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:00.671 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.671 [2024-10-15 09:12:18.361183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:00.671 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:00.671 Zero copy mechanism will not be used. 00:13:00.671 Running I/O for 60 seconds... 00:13:00.930 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.930 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.930 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.930 [2024-10-15 09:12:18.778471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.189 09:12:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.189 09:12:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:01.189 [2024-10-15 09:12:18.857218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:01.189 [2024-10-15 09:12:18.859497] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.189 [2024-10-15 09:12:18.976738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:01.189 [2024-10-15 09:12:18.977406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:01.448 [2024-10-15 09:12:19.087907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:01.448 [2024-10-15 09:12:19.088288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:01.707 149.00 IOPS, 447.00 MiB/s 
[2024-10-15T09:12:19.603Z] [2024-10-15 09:12:19.411105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:01.966 [2024-10-15 09:12:19.669467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.966 09:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.225 "name": "raid_bdev1", 00:13:02.225 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:02.225 "strip_size_kb": 0, 00:13:02.225 "state": "online", 00:13:02.225 "raid_level": "raid1", 00:13:02.225 "superblock": false, 00:13:02.225 "num_base_bdevs": 2, 00:13:02.225 "num_base_bdevs_discovered": 2, 00:13:02.225 "num_base_bdevs_operational": 2, 00:13:02.225 "process": { 00:13:02.225 "type": "rebuild", 00:13:02.225 "target": "spare", 
00:13:02.225 "progress": { 00:13:02.225 "blocks": 10240, 00:13:02.225 "percent": 15 00:13:02.225 } 00:13:02.225 }, 00:13:02.225 "base_bdevs_list": [ 00:13:02.225 { 00:13:02.225 "name": "spare", 00:13:02.225 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:02.225 "is_configured": true, 00:13:02.225 "data_offset": 0, 00:13:02.225 "data_size": 65536 00:13:02.225 }, 00:13:02.225 { 00:13:02.225 "name": "BaseBdev2", 00:13:02.225 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:02.225 "is_configured": true, 00:13:02.225 "data_offset": 0, 00:13:02.225 "data_size": 65536 00:13:02.225 } 00:13:02.225 ] 00:13:02.225 }' 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.225 09:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.225 [2024-10-15 09:12:20.005811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.225 [2024-10-15 09:12:20.010500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:02.225 [2024-10-15 09:12:20.111651] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.485 [2024-10-15 09:12:20.121441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.485 [2024-10-15 09:12:20.121506] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.485 [2024-10-15 09:12:20.121522] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.485 [2024-10-15 09:12:20.173894] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.485 "name": "raid_bdev1", 00:13:02.485 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:02.485 "strip_size_kb": 0, 00:13:02.485 "state": "online", 00:13:02.485 "raid_level": "raid1", 00:13:02.485 "superblock": false, 00:13:02.485 "num_base_bdevs": 2, 00:13:02.485 "num_base_bdevs_discovered": 1, 00:13:02.485 "num_base_bdevs_operational": 1, 00:13:02.485 "base_bdevs_list": [ 00:13:02.485 { 00:13:02.485 "name": null, 00:13:02.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.485 "is_configured": false, 00:13:02.485 "data_offset": 0, 00:13:02.485 "data_size": 65536 00:13:02.485 }, 00:13:02.485 { 00:13:02.485 "name": "BaseBdev2", 00:13:02.485 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:02.485 "is_configured": true, 00:13:02.485 "data_offset": 0, 00:13:02.485 "data_size": 65536 00:13:02.485 } 00:13:02.485 ] 00:13:02.485 }' 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.485 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.053 129.50 IOPS, 388.50 MiB/s [2024-10-15T09:12:20.949Z] 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.053 "name": "raid_bdev1", 00:13:03.053 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:03.053 "strip_size_kb": 0, 00:13:03.053 "state": "online", 00:13:03.053 "raid_level": "raid1", 00:13:03.053 "superblock": false, 00:13:03.053 "num_base_bdevs": 2, 00:13:03.053 "num_base_bdevs_discovered": 1, 00:13:03.053 "num_base_bdevs_operational": 1, 00:13:03.053 "base_bdevs_list": [ 00:13:03.053 { 00:13:03.053 "name": null, 00:13:03.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.053 "is_configured": false, 00:13:03.053 "data_offset": 0, 00:13:03.053 "data_size": 65536 00:13:03.053 }, 00:13:03.053 { 00:13:03.053 "name": "BaseBdev2", 00:13:03.053 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:03.053 "is_configured": true, 00:13:03.053 "data_offset": 0, 00:13:03.053 "data_size": 65536 00:13:03.053 } 00:13:03.053 ] 00:13:03.053 }' 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.053 09:12:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.053 [2024-10-15 09:12:20.815452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.053 09:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:03.053 [2024-10-15 09:12:20.890422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:03.053 [2024-10-15 09:12:20.892664] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.313 [2024-10-15 09:12:21.033827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.313 [2024-10-15 09:12:21.152877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.313 [2024-10-15 09:12:21.153249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.571 160.67 IOPS, 482.00 MiB/s [2024-10-15T09:12:21.467Z] [2024-10-15 09:12:21.450439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:03.830 [2024-10-15 09:12:21.576792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.090 09:12:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.090 "name": "raid_bdev1", 00:13:04.090 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:04.090 "strip_size_kb": 0, 00:13:04.090 "state": "online", 00:13:04.090 "raid_level": "raid1", 00:13:04.090 "superblock": false, 00:13:04.090 "num_base_bdevs": 2, 00:13:04.090 "num_base_bdevs_discovered": 2, 00:13:04.090 "num_base_bdevs_operational": 2, 00:13:04.090 "process": { 00:13:04.090 "type": "rebuild", 00:13:04.090 "target": "spare", 00:13:04.090 "progress": { 00:13:04.090 "blocks": 14336, 00:13:04.090 "percent": 21 00:13:04.090 } 00:13:04.090 }, 00:13:04.090 "base_bdevs_list": [ 00:13:04.090 { 00:13:04.090 "name": "spare", 00:13:04.090 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:04.090 "is_configured": true, 00:13:04.090 "data_offset": 0, 00:13:04.090 "data_size": 65536 00:13:04.090 }, 00:13:04.090 { 00:13:04.090 "name": "BaseBdev2", 00:13:04.090 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:04.090 "is_configured": true, 00:13:04.090 "data_offset": 0, 00:13:04.090 "data_size": 65536 00:13:04.090 } 00:13:04.090 ] 00:13:04.090 }' 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:04.090 [2024-10-15 09:12:21.936746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.090 09:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.349 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.349 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=426 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.350 09:12:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.350 "name": "raid_bdev1", 00:13:04.350 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:04.350 "strip_size_kb": 0, 00:13:04.350 "state": "online", 00:13:04.350 "raid_level": "raid1", 00:13:04.350 "superblock": false, 00:13:04.350 "num_base_bdevs": 2, 00:13:04.350 "num_base_bdevs_discovered": 2, 00:13:04.350 "num_base_bdevs_operational": 2, 00:13:04.350 "process": { 00:13:04.350 "type": "rebuild", 00:13:04.350 "target": "spare", 00:13:04.350 "progress": { 00:13:04.350 "blocks": 16384, 00:13:04.350 "percent": 25 00:13:04.350 } 00:13:04.350 }, 00:13:04.350 "base_bdevs_list": [ 00:13:04.350 { 00:13:04.350 "name": "spare", 00:13:04.350 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:04.350 "is_configured": true, 00:13:04.350 "data_offset": 0, 00:13:04.350 "data_size": 65536 00:13:04.350 }, 00:13:04.350 { 00:13:04.350 "name": "BaseBdev2", 00:13:04.350 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:04.350 "is_configured": true, 00:13:04.350 "data_offset": 0, 00:13:04.350 "data_size": 65536 00:13:04.350 } 00:13:04.350 ] 00:13:04.350 }' 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.350 09:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:13:04.609 [2024-10-15 09:12:22.303492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:04.609 [2024-10-15 09:12:22.304120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:04.609 148.50 IOPS, 445.50 MiB/s [2024-10-15T09:12:22.505Z] [2024-10-15 09:12:22.446494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:05.176 [2024-10-15 09:12:22.921663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:05.436 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.437 "name": "raid_bdev1", 00:13:05.437 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:05.437 "strip_size_kb": 0, 00:13:05.437 "state": "online", 00:13:05.437 "raid_level": "raid1", 00:13:05.437 "superblock": false, 00:13:05.437 "num_base_bdevs": 2, 00:13:05.437 "num_base_bdevs_discovered": 2, 00:13:05.437 "num_base_bdevs_operational": 2, 00:13:05.437 "process": { 00:13:05.437 "type": "rebuild", 00:13:05.437 "target": "spare", 00:13:05.437 "progress": { 00:13:05.437 "blocks": 28672, 00:13:05.437 "percent": 43 00:13:05.437 } 00:13:05.437 }, 00:13:05.437 "base_bdevs_list": [ 00:13:05.437 { 00:13:05.437 "name": "spare", 00:13:05.437 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:05.437 "is_configured": true, 00:13:05.437 "data_offset": 0, 00:13:05.437 "data_size": 65536 00:13:05.437 }, 00:13:05.437 { 00:13:05.437 "name": "BaseBdev2", 00:13:05.437 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:05.437 "is_configured": true, 00:13:05.437 "data_offset": 0, 00:13:05.437 "data_size": 65536 00:13:05.437 } 00:13:05.437 ] 00:13:05.437 }' 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.437 [2024-10-15 09:12:23.264066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.437 09:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.696 130.40 IOPS, 391.20 MiB/s [2024-10-15T09:12:23.592Z] [2024-10-15 09:12:23.467470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
34816 offset_begin: 30720 offset_end: 36864 00:13:05.956 [2024-10-15 09:12:23.683990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:05.956 [2024-10-15 09:12:23.800062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:06.525 [2024-10-15 09:12:24.142305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.525 "name": "raid_bdev1", 00:13:06.525 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:06.525 "strip_size_kb": 0, 00:13:06.525 
"state": "online", 00:13:06.525 "raid_level": "raid1", 00:13:06.525 "superblock": false, 00:13:06.525 "num_base_bdevs": 2, 00:13:06.525 "num_base_bdevs_discovered": 2, 00:13:06.525 "num_base_bdevs_operational": 2, 00:13:06.525 "process": { 00:13:06.525 "type": "rebuild", 00:13:06.525 "target": "spare", 00:13:06.525 "progress": { 00:13:06.525 "blocks": 47104, 00:13:06.525 "percent": 71 00:13:06.525 } 00:13:06.525 }, 00:13:06.525 "base_bdevs_list": [ 00:13:06.525 { 00:13:06.525 "name": "spare", 00:13:06.525 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:06.525 "is_configured": true, 00:13:06.525 "data_offset": 0, 00:13:06.525 "data_size": 65536 00:13:06.525 }, 00:13:06.525 { 00:13:06.525 "name": "BaseBdev2", 00:13:06.525 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:06.525 "is_configured": true, 00:13:06.525 "data_offset": 0, 00:13:06.525 "data_size": 65536 00:13:06.525 } 00:13:06.525 ] 00:13:06.525 }' 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.525 117.33 IOPS, 352.00 MiB/s [2024-10-15T09:12:24.421Z] 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.525 09:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.784 [2024-10-15 09:12:24.482262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:07.044 [2024-10-15 09:12:24.698649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:07.612 105.57 IOPS, 316.71 MiB/s [2024-10-15T09:12:25.508Z] 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.612 "name": "raid_bdev1", 00:13:07.612 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:07.612 "strip_size_kb": 0, 00:13:07.612 "state": "online", 00:13:07.612 "raid_level": "raid1", 00:13:07.612 "superblock": false, 00:13:07.612 "num_base_bdevs": 2, 00:13:07.612 "num_base_bdevs_discovered": 2, 00:13:07.612 "num_base_bdevs_operational": 2, 00:13:07.612 "process": { 00:13:07.612 "type": "rebuild", 00:13:07.612 "target": "spare", 00:13:07.612 "progress": { 00:13:07.612 "blocks": 63488, 00:13:07.612 "percent": 96 00:13:07.612 } 00:13:07.612 }, 00:13:07.612 "base_bdevs_list": [ 00:13:07.612 { 00:13:07.612 "name": "spare", 00:13:07.612 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:07.612 "is_configured": true, 00:13:07.612 "data_offset": 
0, 00:13:07.612 "data_size": 65536 00:13:07.612 }, 00:13:07.612 { 00:13:07.612 "name": "BaseBdev2", 00:13:07.612 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:07.612 "is_configured": true, 00:13:07.612 "data_offset": 0, 00:13:07.612 "data_size": 65536 00:13:07.612 } 00:13:07.612 ] 00:13:07.612 }' 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.612 [2024-10-15 09:12:25.464128] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.612 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.871 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.871 09:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.871 [2024-10-15 09:12:25.563985] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:07.871 [2024-10-15 09:12:25.566435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.808 96.12 IOPS, 288.38 MiB/s [2024-10-15T09:12:26.704Z] 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.808 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.808 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.809 09:12:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.809 "name": "raid_bdev1", 00:13:08.809 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:08.809 "strip_size_kb": 0, 00:13:08.809 "state": "online", 00:13:08.809 "raid_level": "raid1", 00:13:08.809 "superblock": false, 00:13:08.809 "num_base_bdevs": 2, 00:13:08.809 "num_base_bdevs_discovered": 2, 00:13:08.809 "num_base_bdevs_operational": 2, 00:13:08.809 "base_bdevs_list": [ 00:13:08.809 { 00:13:08.809 "name": "spare", 00:13:08.809 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:08.809 "is_configured": true, 00:13:08.809 "data_offset": 0, 00:13:08.809 "data_size": 65536 00:13:08.809 }, 00:13:08.809 { 00:13:08.809 "name": "BaseBdev2", 00:13:08.809 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:08.809 "is_configured": true, 00:13:08.809 "data_offset": 0, 00:13:08.809 "data_size": 65536 00:13:08.809 } 00:13:08.809 ] 00:13:08.809 }' 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@709 -- # break 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.809 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.068 "name": "raid_bdev1", 00:13:09.068 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:09.068 "strip_size_kb": 0, 00:13:09.068 "state": "online", 00:13:09.068 "raid_level": "raid1", 00:13:09.068 "superblock": false, 00:13:09.068 "num_base_bdevs": 2, 00:13:09.068 "num_base_bdevs_discovered": 2, 00:13:09.068 "num_base_bdevs_operational": 2, 00:13:09.068 "base_bdevs_list": [ 00:13:09.068 { 00:13:09.068 "name": "spare", 00:13:09.068 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:09.068 "is_configured": true, 00:13:09.068 "data_offset": 0, 00:13:09.068 "data_size": 65536 00:13:09.068 }, 00:13:09.068 { 00:13:09.068 "name": "BaseBdev2", 00:13:09.068 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:09.068 "is_configured": true, 
00:13:09.068 "data_offset": 0, 00:13:09.068 "data_size": 65536 00:13:09.068 } 00:13:09.068 ] 00:13:09.068 }' 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.068 09:12:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.068 "name": "raid_bdev1", 00:13:09.068 "uuid": "42edaddb-b9f6-4584-b4d6-c2456991b98c", 00:13:09.068 "strip_size_kb": 0, 00:13:09.068 "state": "online", 00:13:09.068 "raid_level": "raid1", 00:13:09.068 "superblock": false, 00:13:09.068 "num_base_bdevs": 2, 00:13:09.068 "num_base_bdevs_discovered": 2, 00:13:09.068 "num_base_bdevs_operational": 2, 00:13:09.068 "base_bdevs_list": [ 00:13:09.068 { 00:13:09.068 "name": "spare", 00:13:09.068 "uuid": "e6cee4fe-c849-5bf3-bf2a-6a6c7784c4a0", 00:13:09.068 "is_configured": true, 00:13:09.068 "data_offset": 0, 00:13:09.068 "data_size": 65536 00:13:09.068 }, 00:13:09.068 { 00:13:09.068 "name": "BaseBdev2", 00:13:09.068 "uuid": "8c599626-9091-5531-a336-3682bf51e072", 00:13:09.068 "is_configured": true, 00:13:09.068 "data_offset": 0, 00:13:09.068 "data_size": 65536 00:13:09.068 } 00:13:09.068 ] 00:13:09.068 }' 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.068 09:12:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.636 [2024-10-15 09:12:27.317394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.636 [2024-10-15 09:12:27.317447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.636 89.89 IOPS, 269.67 MiB/s 00:13:09.636 
Latency(us) 00:13:09.636 [2024-10-15T09:12:27.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.636 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:09.636 raid_bdev1 : 9.07 89.33 268.00 0.00 0.00 15208.02 339.84 135536.46 00:13:09.636 [2024-10-15T09:12:27.532Z] =================================================================================================================== 00:13:09.636 [2024-10-15T09:12:27.532Z] Total : 89.33 268.00 0.00 0.00 15208.02 339.84 135536.46 00:13:09.636 [2024-10-15 09:12:27.436178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.636 [2024-10-15 09:12:27.436243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.636 [2024-10-15 09:12:27.436346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.636 [2024-10-15 09:12:27.436359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:09.636 { 00:13:09.636 "results": [ 00:13:09.636 { 00:13:09.636 "job": "raid_bdev1", 00:13:09.636 "core_mask": "0x1", 00:13:09.636 "workload": "randrw", 00:13:09.636 "percentage": 50, 00:13:09.636 "status": "finished", 00:13:09.636 "queue_depth": 2, 00:13:09.636 "io_size": 3145728, 00:13:09.636 "runtime": 9.067077, 00:13:09.636 "iops": 89.3341922650486, 00:13:09.636 "mibps": 268.0025767951458, 00:13:09.636 "io_failed": 0, 00:13:09.636 "io_timeout": 0, 00:13:09.636 "avg_latency_us": 15208.015854223948, 00:13:09.636 "min_latency_us": 339.8427947598253, 00:13:09.636 "max_latency_us": 135536.46113537118 00:13:09.636 } 00:13:09.636 ], 00:13:09.636 "core_count": 1 00:13:09.636 } 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:09.636 09:12:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.636 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:09.895 /dev/nbd0 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.895 1+0 records in 00:13:09.895 1+0 records out 00:13:09.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463281 s, 8.8 MB/s 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.895 09:12:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:10.168 /dev/nbd1 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.168 1+0 records in 00:13:10.168 1+0 records out 00:13:10.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488736 s, 8.4 MB/s 00:13:10.168 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.169 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:10.169 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.448 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.720 09:12:28 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76606 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76606 ']' 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76606 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76606 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 76606' 00:13:10.979 killing process with pid 76606 00:13:10.979 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76606 00:13:10.979 Received shutdown signal, test time was about 10.485672 seconds 00:13:10.980 00:13:10.980 Latency(us) 00:13:10.980 [2024-10-15T09:12:28.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.980 [2024-10-15T09:12:28.876Z] =================================================================================================================== 00:13:10.980 [2024-10-15T09:12:28.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:10.980 [2024-10-15 09:12:28.829151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.980 09:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76606 00:13:11.239 [2024-10-15 09:12:29.068084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:12.638 00:13:12.638 real 0m13.875s 00:13:12.638 user 0m17.323s 00:13:12.638 sys 0m1.726s 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.638 ************************************ 00:13:12.638 END TEST raid_rebuild_test_io 00:13:12.638 ************************************ 00:13:12.638 09:12:30 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:12.638 09:12:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:12.638 09:12:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.638 09:12:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.638 ************************************ 00:13:12.638 START TEST raid_rebuild_test_sb_io 00:13:12.638 
************************************ 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77003 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77003 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 77003 ']' 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:12.638 09:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.638 [2024-10-15 09:12:30.486648] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:13:12.638 [2024-10-15 09:12:30.486897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:12.638 Zero copy mechanism will not be used. 00:13:12.638 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77003 ] 00:13:12.898 [2024-10-15 09:12:30.654637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.158 [2024-10-15 09:12:30.806171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.421 [2024-10-15 09:12:31.071223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.421 [2024-10-15 09:12:31.071429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.697 BaseBdev1_malloc 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.697 [2024-10-15 09:12:31.420647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:13.697 [2024-10-15 09:12:31.420820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.697 [2024-10-15 09:12:31.420859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:13.697 [2024-10-15 09:12:31.420885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.697 [2024-10-15 09:12:31.423910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.697 [2024-10-15 09:12:31.423996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:13.697 BaseBdev1 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.697 BaseBdev2_malloc 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.697 [2024-10-15 09:12:31.485692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:13.697 [2024-10-15 09:12:31.485834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.697 [2024-10-15 09:12:31.485860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:13.697 [2024-10-15 09:12:31.485872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.697 [2024-10-15 09:12:31.488339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.697 [2024-10-15 09:12:31.488378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:13.697 BaseBdev2 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.697 spare_malloc 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.697 spare_delay 
00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.697 [2024-10-15 09:12:31.576208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:13.697 [2024-10-15 09:12:31.576397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.697 [2024-10-15 09:12:31.576467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:13.697 [2024-10-15 09:12:31.576542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.697 [2024-10-15 09:12:31.580485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.697 [2024-10-15 09:12:31.580664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:13.697 spare 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.697 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.697 [2024-10-15 09:12:31.589139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.957 [2024-10-15 09:12:31.592304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.957 [2024-10-15 09:12:31.592629] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:13.957 [2024-10-15 09:12:31.592660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:13.957 [2024-10-15 09:12:31.593190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:13.957 [2024-10-15 09:12:31.593487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:13.957 [2024-10-15 09:12:31.593513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:13.957 [2024-10-15 09:12:31.593909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.957 09:12:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.957 "name": "raid_bdev1", 00:13:13.957 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:13.957 "strip_size_kb": 0, 00:13:13.957 "state": "online", 00:13:13.957 "raid_level": "raid1", 00:13:13.957 "superblock": true, 00:13:13.957 "num_base_bdevs": 2, 00:13:13.957 "num_base_bdevs_discovered": 2, 00:13:13.957 "num_base_bdevs_operational": 2, 00:13:13.957 "base_bdevs_list": [ 00:13:13.957 { 00:13:13.957 "name": "BaseBdev1", 00:13:13.957 "uuid": "61aa9bd3-8988-5937-bb2b-7a8e29c14ac9", 00:13:13.957 "is_configured": true, 00:13:13.957 "data_offset": 2048, 00:13:13.957 "data_size": 63488 00:13:13.957 }, 00:13:13.957 { 00:13:13.957 "name": "BaseBdev2", 00:13:13.957 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:13.957 "is_configured": true, 00:13:13.957 "data_offset": 2048, 00:13:13.957 "data_size": 63488 00:13:13.957 } 00:13:13.957 ] 00:13:13.957 }' 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.957 09:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.216 09:12:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.216 [2024-10-15 09:12:32.069411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.216 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.475 [2024-10-15 09:12:32.156894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.475 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.475 "name": "raid_bdev1", 00:13:14.475 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:14.475 "strip_size_kb": 0, 00:13:14.475 "state": "online", 00:13:14.475 
"raid_level": "raid1", 00:13:14.475 "superblock": true, 00:13:14.475 "num_base_bdevs": 2, 00:13:14.475 "num_base_bdevs_discovered": 1, 00:13:14.475 "num_base_bdevs_operational": 1, 00:13:14.475 "base_bdevs_list": [ 00:13:14.475 { 00:13:14.475 "name": null, 00:13:14.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.475 "is_configured": false, 00:13:14.475 "data_offset": 0, 00:13:14.475 "data_size": 63488 00:13:14.475 }, 00:13:14.475 { 00:13:14.475 "name": "BaseBdev2", 00:13:14.476 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:14.476 "is_configured": true, 00:13:14.476 "data_offset": 2048, 00:13:14.476 "data_size": 63488 00:13:14.476 } 00:13:14.476 ] 00:13:14.476 }' 00:13:14.476 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.476 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.476 [2024-10-15 09:12:32.251819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:14.476 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.476 Zero copy mechanism will not be used. 00:13:14.476 Running I/O for 60 seconds... 
00:13:14.735 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:14.735 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.735 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.735 [2024-10-15 09:12:32.600938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.994 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.994 09:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:14.994 [2024-10-15 09:12:32.675769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:14.994 [2024-10-15 09:12:32.678380] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.994 [2024-10-15 09:12:32.789645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:14.994 [2024-10-15 09:12:32.790724] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:15.253 [2024-10-15 09:12:32.912624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:15.253 [2024-10-15 09:12:32.913322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:15.512 [2024-10-15 09:12:33.175068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:15.512 [2024-10-15 09:12:33.176257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:15.512 162.00 IOPS, 486.00 MiB/s [2024-10-15T09:12:33.408Z] [2024-10-15 09:12:33.397179] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:15.772 [2024-10-15 09:12:33.622437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:15.772 [2024-10-15 09:12:33.625169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.772 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.031 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.032 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.032 "name": "raid_bdev1", 00:13:16.032 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:16.032 "strip_size_kb": 0, 00:13:16.032 "state": "online", 00:13:16.032 "raid_level": "raid1", 00:13:16.032 "superblock": true, 00:13:16.032 "num_base_bdevs": 2, 00:13:16.032 "num_base_bdevs_discovered": 2, 
00:13:16.032 "num_base_bdevs_operational": 2, 00:13:16.032 "process": { 00:13:16.032 "type": "rebuild", 00:13:16.032 "target": "spare", 00:13:16.032 "progress": { 00:13:16.032 "blocks": 14336, 00:13:16.032 "percent": 22 00:13:16.032 } 00:13:16.032 }, 00:13:16.032 "base_bdevs_list": [ 00:13:16.032 { 00:13:16.032 "name": "spare", 00:13:16.032 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:16.032 "is_configured": true, 00:13:16.032 "data_offset": 2048, 00:13:16.032 "data_size": 63488 00:13:16.032 }, 00:13:16.032 { 00:13:16.032 "name": "BaseBdev2", 00:13:16.032 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:16.032 "is_configured": true, 00:13:16.032 "data_offset": 2048, 00:13:16.032 "data_size": 63488 00:13:16.032 } 00:13:16.032 ] 00:13:16.032 }' 00:13:16.032 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.032 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.032 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.032 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.032 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:16.032 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.032 09:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.032 [2024-10-15 09:12:33.805133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.032 [2024-10-15 09:12:33.835112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:16.291 [2024-10-15 09:12:33.937753] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:16.291 [2024-10-15 09:12:33.943065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.291 [2024-10-15 09:12:33.943184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.291 [2024-10-15 09:12:33.943224] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:16.291 [2024-10-15 09:12:34.000948] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.291 "name": "raid_bdev1", 00:13:16.291 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:16.291 "strip_size_kb": 0, 00:13:16.291 "state": "online", 00:13:16.291 "raid_level": "raid1", 00:13:16.291 "superblock": true, 00:13:16.291 "num_base_bdevs": 2, 00:13:16.291 "num_base_bdevs_discovered": 1, 00:13:16.291 "num_base_bdevs_operational": 1, 00:13:16.291 "base_bdevs_list": [ 00:13:16.291 { 00:13:16.291 "name": null, 00:13:16.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.291 "is_configured": false, 00:13:16.291 "data_offset": 0, 00:13:16.291 "data_size": 63488 00:13:16.291 }, 00:13:16.291 { 00:13:16.291 "name": "BaseBdev2", 00:13:16.291 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:16.291 "is_configured": true, 00:13:16.291 "data_offset": 2048, 00:13:16.291 "data_size": 63488 00:13:16.291 } 00:13:16.291 ] 00:13:16.291 }' 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.291 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.829 131.00 IOPS, 393.00 MiB/s [2024-10-15T09:12:34.725Z] 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.829 "name": "raid_bdev1", 00:13:16.829 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:16.829 "strip_size_kb": 0, 00:13:16.829 "state": "online", 00:13:16.829 "raid_level": "raid1", 00:13:16.829 "superblock": true, 00:13:16.829 "num_base_bdevs": 2, 00:13:16.829 "num_base_bdevs_discovered": 1, 00:13:16.829 "num_base_bdevs_operational": 1, 00:13:16.829 "base_bdevs_list": [ 00:13:16.829 { 00:13:16.829 "name": null, 00:13:16.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.829 "is_configured": false, 00:13:16.829 "data_offset": 0, 00:13:16.829 "data_size": 63488 00:13:16.829 }, 00:13:16.829 { 00:13:16.829 "name": "BaseBdev2", 00:13:16.829 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:16.829 "is_configured": true, 00:13:16.829 "data_offset": 2048, 00:13:16.829 "data_size": 63488 00:13:16.829 } 00:13:16.829 ] 00:13:16.829 }' 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.829 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.829 [2024-10-15 09:12:34.698347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.089 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.089 09:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:17.089 [2024-10-15 09:12:34.763235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:17.089 [2024-10-15 09:12:34.765263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.089 [2024-10-15 09:12:34.879984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.089 [2024-10-15 09:12:34.880744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.349 [2024-10-15 09:12:35.091186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.349 [2024-10-15 09:12:35.091565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.608 144.67 IOPS, 434.00 MiB/s [2024-10-15T09:12:35.504Z] [2024-10-15 09:12:35.339903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.608 [2024-10-15 09:12:35.340562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.865 [2024-10-15 09:12:35.561729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.865 [2024-10-15 09:12:35.562134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.866 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.123 "name": "raid_bdev1", 00:13:18.123 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:18.123 "strip_size_kb": 0, 00:13:18.123 "state": "online", 00:13:18.123 "raid_level": "raid1", 00:13:18.123 "superblock": true, 00:13:18.123 "num_base_bdevs": 2, 00:13:18.123 "num_base_bdevs_discovered": 2, 00:13:18.123 "num_base_bdevs_operational": 2, 00:13:18.123 
"process": { 00:13:18.123 "type": "rebuild", 00:13:18.123 "target": "spare", 00:13:18.123 "progress": { 00:13:18.123 "blocks": 10240, 00:13:18.123 "percent": 16 00:13:18.123 } 00:13:18.123 }, 00:13:18.123 "base_bdevs_list": [ 00:13:18.123 { 00:13:18.123 "name": "spare", 00:13:18.123 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:18.123 "is_configured": true, 00:13:18.123 "data_offset": 2048, 00:13:18.123 "data_size": 63488 00:13:18.123 }, 00:13:18.123 { 00:13:18.123 "name": "BaseBdev2", 00:13:18.123 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:18.123 "is_configured": true, 00:13:18.123 "data_offset": 2048, 00:13:18.123 "data_size": 63488 00:13:18.123 } 00:13:18.123 ] 00:13:18.123 }' 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:18.123 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=439 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.123 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.124 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.124 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.124 "name": "raid_bdev1", 00:13:18.124 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:18.124 "strip_size_kb": 0, 00:13:18.124 "state": "online", 00:13:18.124 "raid_level": "raid1", 00:13:18.124 "superblock": true, 00:13:18.124 "num_base_bdevs": 2, 00:13:18.124 "num_base_bdevs_discovered": 2, 00:13:18.124 "num_base_bdevs_operational": 2, 00:13:18.124 "process": { 00:13:18.124 "type": "rebuild", 00:13:18.124 "target": "spare", 00:13:18.124 "progress": { 00:13:18.124 "blocks": 12288, 00:13:18.124 "percent": 19 00:13:18.124 } 00:13:18.124 }, 00:13:18.124 "base_bdevs_list": [ 00:13:18.124 { 00:13:18.124 "name": "spare", 00:13:18.124 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 
00:13:18.124 "is_configured": true, 00:13:18.124 "data_offset": 2048, 00:13:18.124 "data_size": 63488 00:13:18.124 }, 00:13:18.124 { 00:13:18.124 "name": "BaseBdev2", 00:13:18.124 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:18.124 "is_configured": true, 00:13:18.124 "data_offset": 2048, 00:13:18.124 "data_size": 63488 00:13:18.124 } 00:13:18.124 ] 00:13:18.124 }' 00:13:18.124 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.124 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.124 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.124 [2024-10-15 09:12:35.985790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:18.124 [2024-10-15 09:12:35.986188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:18.124 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.124 09:12:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:18.690 123.00 IOPS, 369.00 MiB/s [2024-10-15T09:12:36.586Z] [2024-10-15 09:12:36.494500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:18.950 [2024-10-15 09:12:36.821688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.208 09:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.209 09:12:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.209 09:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.209 "name": "raid_bdev1", 00:13:19.209 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:19.209 "strip_size_kb": 0, 00:13:19.209 "state": "online", 00:13:19.209 "raid_level": "raid1", 00:13:19.209 "superblock": true, 00:13:19.209 "num_base_bdevs": 2, 00:13:19.209 "num_base_bdevs_discovered": 2, 00:13:19.209 "num_base_bdevs_operational": 2, 00:13:19.209 "process": { 00:13:19.209 "type": "rebuild", 00:13:19.209 "target": "spare", 00:13:19.209 "progress": { 00:13:19.209 "blocks": 26624, 00:13:19.209 "percent": 41 00:13:19.209 } 00:13:19.209 }, 00:13:19.209 "base_bdevs_list": [ 00:13:19.209 { 00:13:19.209 "name": "spare", 00:13:19.209 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:19.209 "is_configured": true, 00:13:19.209 "data_offset": 2048, 00:13:19.209 "data_size": 63488 00:13:19.209 }, 00:13:19.209 { 00:13:19.209 "name": "BaseBdev2", 00:13:19.209 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:19.209 "is_configured": true, 00:13:19.209 
"data_offset": 2048, 00:13:19.209 "data_size": 63488 00:13:19.209 } 00:13:19.209 ] 00:13:19.209 }' 00:13:19.209 09:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.209 09:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.209 09:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.209 09:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.209 09:12:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:19.467 107.80 IOPS, 323.40 MiB/s [2024-10-15T09:12:37.363Z] [2024-10-15 09:12:37.291794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:19.726 [2024-10-15 09:12:37.507881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:19.984 [2024-10-15 09:12:37.747760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:19.984 [2024-10-15 09:12:37.748496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:20.249 [2024-10-15 09:12:37.958417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:20.249 [2024-10-15 09:12:37.958895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.249 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.509 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.509 "name": "raid_bdev1", 00:13:20.509 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:20.509 "strip_size_kb": 0, 00:13:20.509 "state": "online", 00:13:20.509 "raid_level": "raid1", 00:13:20.509 "superblock": true, 00:13:20.509 "num_base_bdevs": 2, 00:13:20.509 "num_base_bdevs_discovered": 2, 00:13:20.509 "num_base_bdevs_operational": 2, 00:13:20.509 "process": { 00:13:20.509 "type": "rebuild", 00:13:20.509 "target": "spare", 00:13:20.509 "progress": { 00:13:20.509 "blocks": 40960, 00:13:20.509 "percent": 64 00:13:20.509 } 00:13:20.509 }, 00:13:20.509 "base_bdevs_list": [ 00:13:20.509 { 00:13:20.509 "name": "spare", 00:13:20.509 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:20.509 "is_configured": true, 00:13:20.509 "data_offset": 2048, 00:13:20.509 "data_size": 63488 00:13:20.509 }, 00:13:20.509 { 00:13:20.509 "name": "BaseBdev2", 00:13:20.509 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:20.509 
"is_configured": true, 00:13:20.509 "data_offset": 2048, 00:13:20.509 "data_size": 63488 00:13:20.509 } 00:13:20.509 ] 00:13:20.509 }' 00:13:20.509 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.509 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.509 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.509 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.509 09:12:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.509 97.00 IOPS, 291.00 MiB/s [2024-10-15T09:12:38.405Z] [2024-10-15 09:12:38.296037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:20.768 [2024-10-15 09:12:38.419267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:21.026 [2024-10-15 09:12:38.773186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:21.593 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.593 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.594 88.00 IOPS, 264.00 MiB/s [2024-10-15T09:12:39.490Z] 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.594 "name": "raid_bdev1", 00:13:21.594 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:21.594 "strip_size_kb": 0, 00:13:21.594 "state": "online", 00:13:21.594 "raid_level": "raid1", 00:13:21.594 "superblock": true, 00:13:21.594 "num_base_bdevs": 2, 00:13:21.594 "num_base_bdevs_discovered": 2, 00:13:21.594 "num_base_bdevs_operational": 2, 00:13:21.594 "process": { 00:13:21.594 "type": "rebuild", 00:13:21.594 "target": "spare", 00:13:21.594 "progress": { 00:13:21.594 "blocks": 59392, 00:13:21.594 "percent": 93 00:13:21.594 } 00:13:21.594 }, 00:13:21.594 "base_bdevs_list": [ 00:13:21.594 { 00:13:21.594 "name": "spare", 00:13:21.594 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:21.594 "is_configured": true, 00:13:21.594 "data_offset": 2048, 00:13:21.594 "data_size": 63488 00:13:21.594 }, 00:13:21.594 { 00:13:21.594 "name": "BaseBdev2", 00:13:21.594 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:21.594 "is_configured": true, 00:13:21.594 "data_offset": 2048, 00:13:21.594 "data_size": 63488 00:13:21.594 } 00:13:21.594 ] 00:13:21.594 }' 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.594 09:12:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.594 09:12:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.594 [2024-10-15 09:12:39.431057] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.852 [2024-10-15 09:12:39.530934] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.852 [2024-10-15 09:12:39.533420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.676 81.62 IOPS, 244.88 MiB/s [2024-10-15T09:12:40.572Z] 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.676 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.676 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.676 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.676 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.676 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.676 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.677 "name": "raid_bdev1", 00:13:22.677 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:22.677 "strip_size_kb": 0, 00:13:22.677 "state": "online", 00:13:22.677 "raid_level": "raid1", 00:13:22.677 "superblock": true, 00:13:22.677 "num_base_bdevs": 2, 00:13:22.677 "num_base_bdevs_discovered": 2, 00:13:22.677 "num_base_bdevs_operational": 2, 00:13:22.677 "base_bdevs_list": [ 00:13:22.677 { 00:13:22.677 "name": "spare", 00:13:22.677 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:22.677 "is_configured": true, 00:13:22.677 "data_offset": 2048, 00:13:22.677 "data_size": 63488 00:13:22.677 }, 00:13:22.677 { 00:13:22.677 "name": "BaseBdev2", 00:13:22.677 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:22.677 "is_configured": true, 00:13:22.677 "data_offset": 2048, 00:13:22.677 "data_size": 63488 00:13:22.677 } 00:13:22.677 ] 00:13:22.677 }' 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 
00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.677 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.935 "name": "raid_bdev1", 00:13:22.935 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:22.935 "strip_size_kb": 0, 00:13:22.935 "state": "online", 00:13:22.935 "raid_level": "raid1", 00:13:22.935 "superblock": true, 00:13:22.935 "num_base_bdevs": 2, 00:13:22.935 "num_base_bdevs_discovered": 2, 00:13:22.935 "num_base_bdevs_operational": 2, 00:13:22.935 "base_bdevs_list": [ 00:13:22.935 { 00:13:22.935 "name": "spare", 00:13:22.935 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:22.935 "is_configured": true, 00:13:22.935 "data_offset": 2048, 00:13:22.935 "data_size": 63488 00:13:22.935 }, 00:13:22.935 { 00:13:22.935 "name": "BaseBdev2", 00:13:22.935 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:22.935 "is_configured": true, 00:13:22.935 "data_offset": 2048, 00:13:22.935 "data_size": 63488 00:13:22.935 } 00:13:22.935 ] 00:13:22.935 }' 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.935 "name": "raid_bdev1", 00:13:22.935 "uuid": 
"d5578320-1387-4053-9682-f9e67506fd37", 00:13:22.935 "strip_size_kb": 0, 00:13:22.935 "state": "online", 00:13:22.935 "raid_level": "raid1", 00:13:22.935 "superblock": true, 00:13:22.935 "num_base_bdevs": 2, 00:13:22.935 "num_base_bdevs_discovered": 2, 00:13:22.935 "num_base_bdevs_operational": 2, 00:13:22.935 "base_bdevs_list": [ 00:13:22.935 { 00:13:22.935 "name": "spare", 00:13:22.935 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:22.935 "is_configured": true, 00:13:22.935 "data_offset": 2048, 00:13:22.935 "data_size": 63488 00:13:22.935 }, 00:13:22.935 { 00:13:22.935 "name": "BaseBdev2", 00:13:22.935 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:22.935 "is_configured": true, 00:13:22.935 "data_offset": 2048, 00:13:22.935 "data_size": 63488 00:13:22.935 } 00:13:22.935 ] 00:13:22.935 }' 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.935 09:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.501 [2024-10-15 09:12:41.164724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:23.501 [2024-10-15 09:12:41.164845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.501 00:13:23.501 Latency(us) 00:13:23.501 [2024-10-15T09:12:41.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.501 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:23.501 raid_bdev1 : 8.97 77.06 231.17 0.00 0.00 17994.86 338.05 141946.97 00:13:23.501 [2024-10-15T09:12:41.397Z] 
=================================================================================================================== 00:13:23.501 [2024-10-15T09:12:41.397Z] Total : 77.06 231.17 0.00 0.00 17994.86 338.05 141946.97 00:13:23.501 [2024-10-15 09:12:41.228655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.501 [2024-10-15 09:12:41.228735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.501 [2024-10-15 09:12:41.228829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.501 [2024-10-15 09:12:41.228842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:23.501 { 00:13:23.501 "results": [ 00:13:23.501 { 00:13:23.501 "job": "raid_bdev1", 00:13:23.501 "core_mask": "0x1", 00:13:23.501 "workload": "randrw", 00:13:23.501 "percentage": 50, 00:13:23.501 "status": "finished", 00:13:23.501 "queue_depth": 2, 00:13:23.501 "io_size": 3145728, 00:13:23.501 "runtime": 8.967309, 00:13:23.501 "iops": 77.0576769463392, 00:13:23.501 "mibps": 231.1730308390176, 00:13:23.501 "io_failed": 0, 00:13:23.501 "io_timeout": 0, 00:13:23.501 "avg_latency_us": 17994.85847104696, 00:13:23.501 "min_latency_us": 338.05414847161575, 00:13:23.501 "max_latency_us": 141946.96943231442 00:13:23.501 } 00:13:23.501 ], 00:13:23.501 "core_count": 1 00:13:23.501 } 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.501 09:12:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.501 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:23.759 /dev/nbd0 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 
00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.759 1+0 records in 00:13:23.759 1+0 records out 00:13:23.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457331 s, 9.0 MB/s 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.759 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:24.017 /dev/nbd1 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:24.017 09:12:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.017 1+0 records in 00:13:24.017 1+0 records out 00:13:24.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567119 s, 7.2 MB/s 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.017 09:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:24.275 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:24.275 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.275 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:24.275 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:13:24.275 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.275 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.275 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.533 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.790 [2024-10-15 09:12:42.605220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.790 [2024-10-15 09:12:42.605362] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.790 [2024-10-15 09:12:42.605402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:24.790 [2024-10-15 09:12:42.605452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.790 [2024-10-15 09:12:42.607815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.790 [2024-10-15 09:12:42.607898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.790 [2024-10-15 09:12:42.608015] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:24.790 [2024-10-15 09:12:42.608104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.790 [2024-10-15 09:12:42.608281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.790 spare 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.790 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.047 [2024-10-15 09:12:42.708243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:25.047 [2024-10-15 09:12:42.708384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.047 [2024-10-15 09:12:42.708774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:25.047 [2024-10-15 09:12:42.709071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:25.047 [2024-10-15 09:12:42.709125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007b00 00:13:25.047 [2024-10-15 09:12:42.709392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.047 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.048 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.048 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.048 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.048 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.048 09:12:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.048 "name": "raid_bdev1", 00:13:25.048 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:25.048 "strip_size_kb": 0, 00:13:25.048 "state": "online", 00:13:25.048 "raid_level": "raid1", 00:13:25.048 "superblock": true, 00:13:25.048 "num_base_bdevs": 2, 00:13:25.048 "num_base_bdevs_discovered": 2, 00:13:25.048 "num_base_bdevs_operational": 2, 00:13:25.048 "base_bdevs_list": [ 00:13:25.048 { 00:13:25.048 "name": "spare", 00:13:25.048 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:25.048 "is_configured": true, 00:13:25.048 "data_offset": 2048, 00:13:25.048 "data_size": 63488 00:13:25.048 }, 00:13:25.048 { 00:13:25.048 "name": "BaseBdev2", 00:13:25.048 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:25.048 "is_configured": true, 00:13:25.048 "data_offset": 2048, 00:13:25.048 "data_size": 63488 00:13:25.048 } 00:13:25.048 ] 00:13:25.048 }' 00:13:25.048 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.048 09:12:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.613 09:12:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.613 "name": "raid_bdev1", 00:13:25.613 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:25.613 "strip_size_kb": 0, 00:13:25.613 "state": "online", 00:13:25.613 "raid_level": "raid1", 00:13:25.613 "superblock": true, 00:13:25.613 "num_base_bdevs": 2, 00:13:25.613 "num_base_bdevs_discovered": 2, 00:13:25.613 "num_base_bdevs_operational": 2, 00:13:25.613 "base_bdevs_list": [ 00:13:25.613 { 00:13:25.613 "name": "spare", 00:13:25.613 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:25.613 "is_configured": true, 00:13:25.613 "data_offset": 2048, 00:13:25.613 "data_size": 63488 00:13:25.613 }, 00:13:25.613 { 00:13:25.613 "name": "BaseBdev2", 00:13:25.613 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:25.613 "is_configured": true, 00:13:25.613 "data_offset": 2048, 00:13:25.613 "data_size": 63488 00:13:25.613 } 00:13:25.613 ] 00:13:25.613 }' 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.613 09:12:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.613 [2024-10-15 09:12:43.408307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.613 "name": "raid_bdev1", 00:13:25.613 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:25.613 "strip_size_kb": 0, 00:13:25.613 "state": "online", 00:13:25.613 "raid_level": "raid1", 00:13:25.613 "superblock": true, 00:13:25.613 "num_base_bdevs": 2, 00:13:25.613 "num_base_bdevs_discovered": 1, 00:13:25.613 "num_base_bdevs_operational": 1, 00:13:25.613 "base_bdevs_list": [ 00:13:25.613 { 00:13:25.613 "name": null, 00:13:25.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.613 "is_configured": false, 00:13:25.613 "data_offset": 0, 00:13:25.613 "data_size": 63488 00:13:25.613 }, 00:13:25.613 { 00:13:25.613 "name": "BaseBdev2", 00:13:25.613 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:25.613 "is_configured": true, 00:13:25.613 "data_offset": 2048, 00:13:25.613 "data_size": 63488 00:13:25.613 } 00:13:25.613 ] 00:13:25.613 }' 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.613 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.179 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:13:26.179 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.179 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.179 [2024-10-15 09:12:43.927528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.179 [2024-10-15 09:12:43.927873] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:26.179 [2024-10-15 09:12:43.927941] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:26.179 [2024-10-15 09:12:43.928015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.179 [2024-10-15 09:12:43.946552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:26.179 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.179 09:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:26.179 [2024-10-15 09:12:43.948817] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.116 09:12:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.116 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.116 "name": "raid_bdev1", 00:13:27.116 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:27.116 "strip_size_kb": 0, 00:13:27.116 "state": "online", 00:13:27.116 "raid_level": "raid1", 00:13:27.116 "superblock": true, 00:13:27.116 "num_base_bdevs": 2, 00:13:27.116 "num_base_bdevs_discovered": 2, 00:13:27.116 "num_base_bdevs_operational": 2, 00:13:27.116 "process": { 00:13:27.116 "type": "rebuild", 00:13:27.116 "target": "spare", 00:13:27.116 "progress": { 00:13:27.116 "blocks": 20480, 00:13:27.116 "percent": 32 00:13:27.116 } 00:13:27.116 }, 00:13:27.116 "base_bdevs_list": [ 00:13:27.116 { 00:13:27.116 "name": "spare", 00:13:27.116 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:27.116 "is_configured": true, 00:13:27.116 "data_offset": 2048, 00:13:27.116 "data_size": 63488 00:13:27.116 }, 00:13:27.116 { 00:13:27.116 "name": "BaseBdev2", 00:13:27.116 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:27.116 "is_configured": true, 00:13:27.116 "data_offset": 2048, 00:13:27.116 "data_size": 63488 00:13:27.116 } 00:13:27.116 ] 00:13:27.116 }' 00:13:27.116 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.375 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.375 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.375 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.375 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:27.375 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.375 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.376 [2024-10-15 09:12:45.084305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.376 [2024-10-15 09:12:45.155245] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:27.376 [2024-10-15 09:12:45.155426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.376 [2024-10-15 09:12:45.155471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.376 [2024-10-15 09:12:45.155481] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.376 "name": "raid_bdev1", 00:13:27.376 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:27.376 "strip_size_kb": 0, 00:13:27.376 "state": "online", 00:13:27.376 "raid_level": "raid1", 00:13:27.376 "superblock": true, 00:13:27.376 "num_base_bdevs": 2, 00:13:27.376 "num_base_bdevs_discovered": 1, 00:13:27.376 "num_base_bdevs_operational": 1, 00:13:27.376 "base_bdevs_list": [ 00:13:27.376 { 00:13:27.376 "name": null, 00:13:27.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.376 "is_configured": false, 00:13:27.376 "data_offset": 0, 00:13:27.376 "data_size": 63488 00:13:27.376 }, 00:13:27.376 { 00:13:27.376 "name": "BaseBdev2", 00:13:27.376 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:27.376 "is_configured": true, 00:13:27.376 "data_offset": 2048, 00:13:27.376 "data_size": 63488 00:13:27.376 } 00:13:27.376 ] 00:13:27.376 }' 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.376 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:27.945 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.945 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.945 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.945 [2024-10-15 09:12:45.651139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.945 [2024-10-15 09:12:45.651328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.945 [2024-10-15 09:12:45.651358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:27.945 [2024-10-15 09:12:45.651367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.945 [2024-10-15 09:12:45.651904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.945 [2024-10-15 09:12:45.651935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.945 [2024-10-15 09:12:45.652040] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:27.945 [2024-10-15 09:12:45.652053] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:27.945 [2024-10-15 09:12:45.652068] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:27.945 [2024-10-15 09:12:45.652099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.945 [2024-10-15 09:12:45.670773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:27.945 spare 00:13:27.945 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.945 09:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:27.945 [2024-10-15 09:12:45.672793] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.885 "name": "raid_bdev1", 00:13:28.885 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:28.885 "strip_size_kb": 0, 00:13:28.885 
"state": "online", 00:13:28.885 "raid_level": "raid1", 00:13:28.885 "superblock": true, 00:13:28.885 "num_base_bdevs": 2, 00:13:28.885 "num_base_bdevs_discovered": 2, 00:13:28.885 "num_base_bdevs_operational": 2, 00:13:28.885 "process": { 00:13:28.885 "type": "rebuild", 00:13:28.885 "target": "spare", 00:13:28.885 "progress": { 00:13:28.885 "blocks": 20480, 00:13:28.885 "percent": 32 00:13:28.885 } 00:13:28.885 }, 00:13:28.885 "base_bdevs_list": [ 00:13:28.885 { 00:13:28.885 "name": "spare", 00:13:28.885 "uuid": "0eef2ed4-a001-5e02-91fc-d8d6c74ebc11", 00:13:28.885 "is_configured": true, 00:13:28.885 "data_offset": 2048, 00:13:28.885 "data_size": 63488 00:13:28.885 }, 00:13:28.885 { 00:13:28.885 "name": "BaseBdev2", 00:13:28.885 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:28.885 "is_configured": true, 00:13:28.885 "data_offset": 2048, 00:13:28.885 "data_size": 63488 00:13:28.885 } 00:13:28.885 ] 00:13:28.885 }' 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.885 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.145 [2024-10-15 09:12:46.828635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.145 [2024-10-15 09:12:46.878890] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:29.145 [2024-10-15 09:12:46.878993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.145 [2024-10-15 09:12:46.879008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.145 [2024-10-15 09:12:46.879017] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.145 09:12:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.145 "name": "raid_bdev1", 00:13:29.145 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:29.145 "strip_size_kb": 0, 00:13:29.145 "state": "online", 00:13:29.145 "raid_level": "raid1", 00:13:29.145 "superblock": true, 00:13:29.145 "num_base_bdevs": 2, 00:13:29.145 "num_base_bdevs_discovered": 1, 00:13:29.145 "num_base_bdevs_operational": 1, 00:13:29.145 "base_bdevs_list": [ 00:13:29.145 { 00:13:29.145 "name": null, 00:13:29.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.145 "is_configured": false, 00:13:29.145 "data_offset": 0, 00:13:29.145 "data_size": 63488 00:13:29.145 }, 00:13:29.145 { 00:13:29.145 "name": "BaseBdev2", 00:13:29.145 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:29.145 "is_configured": true, 00:13:29.145 "data_offset": 2048, 00:13:29.145 "data_size": 63488 00:13:29.145 } 00:13:29.145 ] 00:13:29.145 }' 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.145 09:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.714 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.715 "name": "raid_bdev1", 00:13:29.715 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:29.715 "strip_size_kb": 0, 00:13:29.715 "state": "online", 00:13:29.715 "raid_level": "raid1", 00:13:29.715 "superblock": true, 00:13:29.715 "num_base_bdevs": 2, 00:13:29.715 "num_base_bdevs_discovered": 1, 00:13:29.715 "num_base_bdevs_operational": 1, 00:13:29.715 "base_bdevs_list": [ 00:13:29.715 { 00:13:29.715 "name": null, 00:13:29.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.715 "is_configured": false, 00:13:29.715 "data_offset": 0, 00:13:29.715 "data_size": 63488 00:13:29.715 }, 00:13:29.715 { 00:13:29.715 "name": "BaseBdev2", 00:13:29.715 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:29.715 "is_configured": true, 00:13:29.715 "data_offset": 2048, 00:13:29.715 "data_size": 63488 00:13:29.715 } 00:13:29.715 ] 00:13:29.715 }' 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.715 [2024-10-15 09:12:47.491397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.715 [2024-10-15 09:12:47.491485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.715 [2024-10-15 09:12:47.491507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:29.715 [2024-10-15 09:12:47.491520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.715 [2024-10-15 09:12:47.492039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.715 [2024-10-15 09:12:47.492072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.715 [2024-10-15 09:12:47.492161] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:29.715 [2024-10-15 09:12:47.492180] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:29.715 [2024-10-15 09:12:47.492189] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:29.715 [2024-10-15 09:12:47.492209] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:29.715 BaseBdev1 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.715 09:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.665 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.666 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.666 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.666 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.666 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.666 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.666 "name": "raid_bdev1", 00:13:30.666 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:30.666 "strip_size_kb": 0, 00:13:30.666 "state": "online", 00:13:30.666 "raid_level": "raid1", 00:13:30.666 "superblock": true, 00:13:30.666 "num_base_bdevs": 2, 00:13:30.666 "num_base_bdevs_discovered": 1, 00:13:30.666 "num_base_bdevs_operational": 1, 00:13:30.666 "base_bdevs_list": [ 00:13:30.666 { 00:13:30.666 "name": null, 00:13:30.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.666 "is_configured": false, 00:13:30.666 "data_offset": 0, 00:13:30.666 "data_size": 63488 00:13:30.666 }, 00:13:30.666 { 00:13:30.666 "name": "BaseBdev2", 00:13:30.666 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:30.666 "is_configured": true, 00:13:30.666 "data_offset": 2048, 00:13:30.666 "data_size": 63488 00:13:30.666 } 00:13:30.666 ] 00:13:30.666 }' 00:13:30.666 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.666 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.236 "name": "raid_bdev1", 00:13:31.236 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:31.236 "strip_size_kb": 0, 00:13:31.236 "state": "online", 00:13:31.236 "raid_level": "raid1", 00:13:31.236 "superblock": true, 00:13:31.236 "num_base_bdevs": 2, 00:13:31.236 "num_base_bdevs_discovered": 1, 00:13:31.236 "num_base_bdevs_operational": 1, 00:13:31.236 "base_bdevs_list": [ 00:13:31.236 { 00:13:31.236 "name": null, 00:13:31.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.236 "is_configured": false, 00:13:31.236 "data_offset": 0, 00:13:31.236 "data_size": 63488 00:13:31.236 }, 00:13:31.236 { 00:13:31.236 "name": "BaseBdev2", 00:13:31.236 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:31.236 "is_configured": true, 00:13:31.236 "data_offset": 2048, 00:13:31.236 "data_size": 63488 00:13:31.236 } 00:13:31.236 ] 00:13:31.236 }' 00:13:31.236 09:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.236 [2024-10-15 09:12:49.104924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.236 [2024-10-15 09:12:49.105243] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:31.236 [2024-10-15 09:12:49.105302] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:31.236 request: 00:13:31.236 { 00:13:31.236 "base_bdev": "BaseBdev1", 00:13:31.236 "raid_bdev": "raid_bdev1", 00:13:31.236 "method": "bdev_raid_add_base_bdev", 00:13:31.236 "req_id": 1 00:13:31.236 } 00:13:31.236 Got JSON-RPC error response 00:13:31.236 response: 00:13:31.236 { 00:13:31.236 "code": -22, 00:13:31.236 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:31.236 } 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:31.236 09:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.612 "name": "raid_bdev1", 00:13:32.612 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:32.612 "strip_size_kb": 0, 00:13:32.612 "state": "online", 00:13:32.612 "raid_level": "raid1", 00:13:32.612 "superblock": true, 00:13:32.612 "num_base_bdevs": 2, 00:13:32.612 "num_base_bdevs_discovered": 1, 00:13:32.612 "num_base_bdevs_operational": 1, 00:13:32.612 "base_bdevs_list": [ 00:13:32.612 { 00:13:32.612 "name": null, 00:13:32.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.612 "is_configured": false, 00:13:32.612 "data_offset": 0, 00:13:32.612 "data_size": 63488 00:13:32.612 }, 00:13:32.612 { 00:13:32.612 "name": "BaseBdev2", 00:13:32.612 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:32.612 "is_configured": true, 00:13:32.612 "data_offset": 2048, 00:13:32.612 "data_size": 63488 00:13:32.612 } 00:13:32.612 ] 00:13:32.612 }' 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.612 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.870 09:12:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.870 "name": "raid_bdev1", 00:13:32.870 "uuid": "d5578320-1387-4053-9682-f9e67506fd37", 00:13:32.870 "strip_size_kb": 0, 00:13:32.870 "state": "online", 00:13:32.870 "raid_level": "raid1", 00:13:32.870 "superblock": true, 00:13:32.870 "num_base_bdevs": 2, 00:13:32.870 "num_base_bdevs_discovered": 1, 00:13:32.870 "num_base_bdevs_operational": 1, 00:13:32.870 "base_bdevs_list": [ 00:13:32.870 { 00:13:32.870 "name": null, 00:13:32.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.870 "is_configured": false, 00:13:32.870 "data_offset": 0, 00:13:32.870 "data_size": 63488 00:13:32.870 }, 00:13:32.870 { 00:13:32.870 "name": "BaseBdev2", 00:13:32.870 "uuid": "72a71b80-06e0-53b0-b268-cc872c00015c", 00:13:32.870 "is_configured": true, 00:13:32.870 "data_offset": 2048, 00:13:32.870 "data_size": 63488 00:13:32.870 } 00:13:32.870 ] 00:13:32.870 }' 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.870 09:12:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77003 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 77003 ']' 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 77003 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.870 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77003 00:13:33.129 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.129 killing process with pid 77003 00:13:33.129 Received shutdown signal, test time was about 18.548974 seconds 00:13:33.129 00:13:33.129 Latency(us) 00:13:33.129 [2024-10-15T09:12:51.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.129 [2024-10-15T09:12:51.025Z] =================================================================================================================== 00:13:33.129 [2024-10-15T09:12:51.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.129 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.129 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77003' 00:13:33.129 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 77003 00:13:33.129 [2024-10-15 09:12:50.767853] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.129 09:12:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 77003 00:13:33.129 [2024-10-15 09:12:50.768007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.129 [2024-10-15 09:12:50.768081] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.129 [2024-10-15 09:12:50.768092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:33.129 [2024-10-15 09:12:51.015063] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.508 ************************************ 00:13:34.508 END TEST raid_rebuild_test_sb_io 00:13:34.508 ************************************ 00:13:34.508 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:34.508 00:13:34.508 real 0m21.889s 00:13:34.508 user 0m28.225s 00:13:34.508 sys 0m2.563s 00:13:34.508 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:34.508 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.508 09:12:52 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:34.508 09:12:52 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:34.508 09:12:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:34.508 09:12:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.508 09:12:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.508 ************************************ 00:13:34.508 START TEST raid_rebuild_test 00:13:34.509 ************************************ 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:34.509 09:12:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77723 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77723 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77723 ']' 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.509 09:12:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.769 [2024-10-15 09:12:52.463496] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:13:34.769 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:34.769 Zero copy mechanism will not be used. 00:13:34.769 [2024-10-15 09:12:52.463811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77723 ] 00:13:34.769 [2024-10-15 09:12:52.620781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.029 [2024-10-15 09:12:52.742209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.288 [2024-10-15 09:12:52.945828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.288 [2024-10-15 09:12:52.945904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.547 BaseBdev1_malloc 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:35.547 [2024-10-15 09:12:53.434348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.547 [2024-10-15 09:12:53.434438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.547 [2024-10-15 09:12:53.434464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:35.547 [2024-10-15 09:12:53.434475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.547 [2024-10-15 09:12:53.436585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.547 [2024-10-15 09:12:53.436626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.547 BaseBdev1 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.547 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 BaseBdev2_malloc 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 [2024-10-15 09:12:53.490977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:35.807 [2024-10-15 09:12:53.491060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:35.807 [2024-10-15 09:12:53.491080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.807 [2024-10-15 09:12:53.491092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.807 [2024-10-15 09:12:53.493246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.807 [2024-10-15 09:12:53.493381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.807 BaseBdev2 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 BaseBdev3_malloc 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 [2024-10-15 09:12:53.559740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:35.807 [2024-10-15 09:12:53.559806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.807 [2024-10-15 09:12:53.559830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:35.807 [2024-10-15 09:12:53.559841] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.807 [2024-10-15 09:12:53.561927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.807 [2024-10-15 09:12:53.561971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:35.807 BaseBdev3 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 BaseBdev4_malloc 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 [2024-10-15 09:12:53.615265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:35.807 [2024-10-15 09:12:53.615341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.807 [2024-10-15 09:12:53.615363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:35.807 [2024-10-15 09:12:53.615374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.807 [2024-10-15 09:12:53.617487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.807 [2024-10-15 09:12:53.617533] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:35.807 BaseBdev4 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.807 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.807 spare_malloc 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.808 spare_delay 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.808 [2024-10-15 09:12:53.687113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.808 [2024-10-15 09:12:53.687195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.808 [2024-10-15 09:12:53.687218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:35.808 [2024-10-15 09:12:53.687231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.808 [2024-10-15 
09:12:53.689499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.808 [2024-10-15 09:12:53.689542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.808 spare 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.808 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.808 [2024-10-15 09:12:53.699127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.808 [2024-10-15 09:12:53.701011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.808 [2024-10-15 09:12:53.701080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.808 [2024-10-15 09:12:53.701136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:35.808 [2024-10-15 09:12:53.701217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:35.808 [2024-10-15 09:12:53.701230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:35.808 [2024-10-15 09:12:53.701489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:35.808 [2024-10-15 09:12:53.701655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:35.808 [2024-10-15 09:12:53.701668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:35.808 [2024-10-15 09:12:53.701846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:36.068 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.068 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:36.068 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.068 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.068 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.068 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.068 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.068 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.069 "name": "raid_bdev1", 00:13:36.069 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:36.069 "strip_size_kb": 0, 00:13:36.069 "state": "online", 00:13:36.069 "raid_level": 
"raid1", 00:13:36.069 "superblock": false, 00:13:36.069 "num_base_bdevs": 4, 00:13:36.069 "num_base_bdevs_discovered": 4, 00:13:36.069 "num_base_bdevs_operational": 4, 00:13:36.069 "base_bdevs_list": [ 00:13:36.069 { 00:13:36.069 "name": "BaseBdev1", 00:13:36.069 "uuid": "53821550-5a92-5b45-856a-2f05e06c3522", 00:13:36.069 "is_configured": true, 00:13:36.069 "data_offset": 0, 00:13:36.069 "data_size": 65536 00:13:36.069 }, 00:13:36.069 { 00:13:36.069 "name": "BaseBdev2", 00:13:36.069 "uuid": "b3bdc9a0-6e08-539a-ba11-4e94a9df86d5", 00:13:36.069 "is_configured": true, 00:13:36.069 "data_offset": 0, 00:13:36.069 "data_size": 65536 00:13:36.069 }, 00:13:36.069 { 00:13:36.069 "name": "BaseBdev3", 00:13:36.069 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:36.069 "is_configured": true, 00:13:36.069 "data_offset": 0, 00:13:36.069 "data_size": 65536 00:13:36.069 }, 00:13:36.069 { 00:13:36.069 "name": "BaseBdev4", 00:13:36.069 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:36.069 "is_configured": true, 00:13:36.069 "data_offset": 0, 00:13:36.069 "data_size": 65536 00:13:36.069 } 00:13:36.069 ] 00:13:36.069 }' 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.069 09:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.328 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:36.328 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:36.328 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.328 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.329 [2024-10-15 09:12:54.214628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.588 09:12:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:36.588 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.589 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.589 09:12:54 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:36.848 [2024-10-15 09:12:54.509880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:36.848 /dev/nbd0 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.848 1+0 records in 00:13:36.848 1+0 records out 00:13:36.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613688 s, 6.7 MB/s 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:36.848 09:12:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:43.429 65536+0 records in 00:13:43.429 65536+0 records out 00:13:43.429 33554432 bytes (34 MB, 32 MiB) copied, 6.57918 s, 5.1 MB/s 00:13:43.429 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:43.429 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.429 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.429 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.429 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:43.429 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.429 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.689 [2024-10-15 09:13:01.416941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.689 
09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.689 [2024-10-15 09:13:01.442632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.689 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.690 09:13:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.690 "name": "raid_bdev1", 00:13:43.690 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:43.690 "strip_size_kb": 0, 00:13:43.690 "state": "online", 00:13:43.690 "raid_level": "raid1", 00:13:43.690 "superblock": false, 00:13:43.690 "num_base_bdevs": 4, 00:13:43.690 "num_base_bdevs_discovered": 3, 00:13:43.690 "num_base_bdevs_operational": 3, 00:13:43.690 "base_bdevs_list": [ 00:13:43.690 { 00:13:43.690 "name": null, 00:13:43.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.690 "is_configured": false, 00:13:43.690 "data_offset": 0, 00:13:43.690 "data_size": 65536 00:13:43.690 }, 00:13:43.690 { 00:13:43.690 "name": "BaseBdev2", 00:13:43.690 "uuid": "b3bdc9a0-6e08-539a-ba11-4e94a9df86d5", 00:13:43.690 "is_configured": true, 00:13:43.690 "data_offset": 0, 00:13:43.690 "data_size": 65536 00:13:43.690 }, 00:13:43.690 { 00:13:43.690 "name": "BaseBdev3", 00:13:43.690 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:43.690 "is_configured": true, 00:13:43.690 "data_offset": 0, 00:13:43.690 "data_size": 65536 00:13:43.690 }, 00:13:43.690 { 00:13:43.690 "name": "BaseBdev4", 00:13:43.690 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:43.690 
"is_configured": true, 00:13:43.690 "data_offset": 0, 00:13:43.690 "data_size": 65536 00:13:43.690 } 00:13:43.690 ] 00:13:43.690 }' 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.690 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.258 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.258 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.258 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.258 [2024-10-15 09:13:01.901870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.258 [2024-10-15 09:13:01.918401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:44.258 09:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.258 09:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:44.258 [2024-10-15 09:13:01.920576] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.192 09:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.192 "name": "raid_bdev1", 00:13:45.192 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:45.192 "strip_size_kb": 0, 00:13:45.192 "state": "online", 00:13:45.192 "raid_level": "raid1", 00:13:45.192 "superblock": false, 00:13:45.192 "num_base_bdevs": 4, 00:13:45.192 "num_base_bdevs_discovered": 4, 00:13:45.192 "num_base_bdevs_operational": 4, 00:13:45.192 "process": { 00:13:45.192 "type": "rebuild", 00:13:45.192 "target": "spare", 00:13:45.192 "progress": { 00:13:45.193 "blocks": 20480, 00:13:45.193 "percent": 31 00:13:45.193 } 00:13:45.193 }, 00:13:45.193 "base_bdevs_list": [ 00:13:45.193 { 00:13:45.193 "name": "spare", 00:13:45.193 "uuid": "2fe20d06-2732-5f52-aa63-75a6e2686f6b", 00:13:45.193 "is_configured": true, 00:13:45.193 "data_offset": 0, 00:13:45.193 "data_size": 65536 00:13:45.193 }, 00:13:45.193 { 00:13:45.193 "name": "BaseBdev2", 00:13:45.193 "uuid": "b3bdc9a0-6e08-539a-ba11-4e94a9df86d5", 00:13:45.193 "is_configured": true, 00:13:45.193 "data_offset": 0, 00:13:45.193 "data_size": 65536 00:13:45.193 }, 00:13:45.193 { 00:13:45.193 "name": "BaseBdev3", 00:13:45.193 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:45.193 "is_configured": true, 00:13:45.193 "data_offset": 0, 00:13:45.193 "data_size": 65536 00:13:45.193 }, 00:13:45.193 { 00:13:45.193 "name": "BaseBdev4", 00:13:45.193 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:45.193 "is_configured": true, 00:13:45.193 "data_offset": 0, 00:13:45.193 "data_size": 65536 00:13:45.193 } 00:13:45.193 ] 00:13:45.193 }' 00:13:45.193 09:13:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.193 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.193 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.193 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.193 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:45.193 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.193 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.451 [2024-10-15 09:13:03.088027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.451 [2024-10-15 09:13:03.127139] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.451 [2024-10-15 09:13:03.127241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.451 [2024-10-15 09:13:03.127262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.451 [2024-10-15 09:13:03.127273] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.451 "name": "raid_bdev1", 00:13:45.451 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:45.451 "strip_size_kb": 0, 00:13:45.451 "state": "online", 00:13:45.451 "raid_level": "raid1", 00:13:45.451 "superblock": false, 00:13:45.451 "num_base_bdevs": 4, 00:13:45.451 "num_base_bdevs_discovered": 3, 00:13:45.451 "num_base_bdevs_operational": 3, 00:13:45.451 "base_bdevs_list": [ 00:13:45.451 { 00:13:45.451 "name": null, 00:13:45.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.451 "is_configured": false, 00:13:45.451 "data_offset": 0, 00:13:45.451 "data_size": 65536 00:13:45.451 }, 00:13:45.451 { 00:13:45.451 "name": "BaseBdev2", 00:13:45.451 "uuid": "b3bdc9a0-6e08-539a-ba11-4e94a9df86d5", 00:13:45.451 "is_configured": true, 00:13:45.451 "data_offset": 0, 00:13:45.451 "data_size": 65536 00:13:45.451 }, 00:13:45.451 { 
00:13:45.451 "name": "BaseBdev3", 00:13:45.451 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:45.451 "is_configured": true, 00:13:45.451 "data_offset": 0, 00:13:45.451 "data_size": 65536 00:13:45.451 }, 00:13:45.451 { 00:13:45.451 "name": "BaseBdev4", 00:13:45.451 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:45.451 "is_configured": true, 00:13:45.451 "data_offset": 0, 00:13:45.451 "data_size": 65536 00:13:45.451 } 00:13:45.451 ] 00:13:45.451 }' 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.451 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.017 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.017 "name": "raid_bdev1", 00:13:46.017 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:46.017 "strip_size_kb": 0, 00:13:46.017 "state": "online", 
00:13:46.017 "raid_level": "raid1", 00:13:46.017 "superblock": false, 00:13:46.017 "num_base_bdevs": 4, 00:13:46.017 "num_base_bdevs_discovered": 3, 00:13:46.018 "num_base_bdevs_operational": 3, 00:13:46.018 "base_bdevs_list": [ 00:13:46.018 { 00:13:46.018 "name": null, 00:13:46.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.018 "is_configured": false, 00:13:46.018 "data_offset": 0, 00:13:46.018 "data_size": 65536 00:13:46.018 }, 00:13:46.018 { 00:13:46.018 "name": "BaseBdev2", 00:13:46.018 "uuid": "b3bdc9a0-6e08-539a-ba11-4e94a9df86d5", 00:13:46.018 "is_configured": true, 00:13:46.018 "data_offset": 0, 00:13:46.018 "data_size": 65536 00:13:46.018 }, 00:13:46.018 { 00:13:46.018 "name": "BaseBdev3", 00:13:46.018 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:46.018 "is_configured": true, 00:13:46.018 "data_offset": 0, 00:13:46.018 "data_size": 65536 00:13:46.018 }, 00:13:46.018 { 00:13:46.018 "name": "BaseBdev4", 00:13:46.018 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:46.018 "is_configured": true, 00:13:46.018 "data_offset": 0, 00:13:46.018 "data_size": 65536 00:13:46.018 } 00:13:46.018 ] 00:13:46.018 }' 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 [2024-10-15 09:13:03.819406] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.018 [2024-10-15 09:13:03.836488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.018 09:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:46.018 [2024-10-15 09:13:03.839034] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.952 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.952 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.952 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.952 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.952 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.210 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.210 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.210 09:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.210 09:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.210 09:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.211 "name": "raid_bdev1", 00:13:47.211 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:47.211 "strip_size_kb": 0, 00:13:47.211 "state": "online", 00:13:47.211 "raid_level": "raid1", 00:13:47.211 "superblock": false, 00:13:47.211 "num_base_bdevs": 4, 00:13:47.211 
"num_base_bdevs_discovered": 4, 00:13:47.211 "num_base_bdevs_operational": 4, 00:13:47.211 "process": { 00:13:47.211 "type": "rebuild", 00:13:47.211 "target": "spare", 00:13:47.211 "progress": { 00:13:47.211 "blocks": 20480, 00:13:47.211 "percent": 31 00:13:47.211 } 00:13:47.211 }, 00:13:47.211 "base_bdevs_list": [ 00:13:47.211 { 00:13:47.211 "name": "spare", 00:13:47.211 "uuid": "2fe20d06-2732-5f52-aa63-75a6e2686f6b", 00:13:47.211 "is_configured": true, 00:13:47.211 "data_offset": 0, 00:13:47.211 "data_size": 65536 00:13:47.211 }, 00:13:47.211 { 00:13:47.211 "name": "BaseBdev2", 00:13:47.211 "uuid": "b3bdc9a0-6e08-539a-ba11-4e94a9df86d5", 00:13:47.211 "is_configured": true, 00:13:47.211 "data_offset": 0, 00:13:47.211 "data_size": 65536 00:13:47.211 }, 00:13:47.211 { 00:13:47.211 "name": "BaseBdev3", 00:13:47.211 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:47.211 "is_configured": true, 00:13:47.211 "data_offset": 0, 00:13:47.211 "data_size": 65536 00:13:47.211 }, 00:13:47.211 { 00:13:47.211 "name": "BaseBdev4", 00:13:47.211 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:47.211 "is_configured": true, 00:13:47.211 "data_offset": 0, 00:13:47.211 "data_size": 65536 00:13:47.211 } 00:13:47.211 ] 00:13:47.211 }' 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.211 09:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.211 [2024-10-15 09:13:04.981803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:47.211 [2024-10-15 09:13:05.045524] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.211 09:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.211 09:13:05 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.470 "name": "raid_bdev1", 00:13:47.470 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:47.470 "strip_size_kb": 0, 00:13:47.470 "state": "online", 00:13:47.470 "raid_level": "raid1", 00:13:47.470 "superblock": false, 00:13:47.470 "num_base_bdevs": 4, 00:13:47.470 "num_base_bdevs_discovered": 3, 00:13:47.470 "num_base_bdevs_operational": 3, 00:13:47.470 "process": { 00:13:47.470 "type": "rebuild", 00:13:47.470 "target": "spare", 00:13:47.470 "progress": { 00:13:47.470 "blocks": 24576, 00:13:47.470 "percent": 37 00:13:47.470 } 00:13:47.470 }, 00:13:47.470 "base_bdevs_list": [ 00:13:47.470 { 00:13:47.470 "name": "spare", 00:13:47.470 "uuid": "2fe20d06-2732-5f52-aa63-75a6e2686f6b", 00:13:47.470 "is_configured": true, 00:13:47.470 "data_offset": 0, 00:13:47.470 "data_size": 65536 00:13:47.470 }, 00:13:47.470 { 00:13:47.470 "name": null, 00:13:47.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.470 "is_configured": false, 00:13:47.470 "data_offset": 0, 00:13:47.470 "data_size": 65536 00:13:47.470 }, 00:13:47.470 { 00:13:47.470 "name": "BaseBdev3", 00:13:47.470 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:47.470 "is_configured": true, 00:13:47.470 "data_offset": 0, 00:13:47.470 "data_size": 65536 00:13:47.470 }, 00:13:47.470 { 00:13:47.470 "name": "BaseBdev4", 00:13:47.470 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:47.470 "is_configured": true, 00:13:47.470 "data_offset": 0, 00:13:47.470 "data_size": 65536 00:13:47.470 } 00:13:47.470 ] 00:13:47.470 }' 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=469 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.470 "name": "raid_bdev1", 00:13:47.470 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:47.470 "strip_size_kb": 0, 00:13:47.470 "state": "online", 00:13:47.470 "raid_level": "raid1", 00:13:47.470 "superblock": false, 00:13:47.470 "num_base_bdevs": 4, 00:13:47.470 "num_base_bdevs_discovered": 3, 00:13:47.470 "num_base_bdevs_operational": 3, 00:13:47.470 "process": { 00:13:47.470 "type": "rebuild", 00:13:47.470 "target": "spare", 00:13:47.470 "progress": { 
00:13:47.470 "blocks": 26624, 00:13:47.470 "percent": 40 00:13:47.470 } 00:13:47.470 }, 00:13:47.470 "base_bdevs_list": [ 00:13:47.470 { 00:13:47.470 "name": "spare", 00:13:47.470 "uuid": "2fe20d06-2732-5f52-aa63-75a6e2686f6b", 00:13:47.470 "is_configured": true, 00:13:47.470 "data_offset": 0, 00:13:47.470 "data_size": 65536 00:13:47.470 }, 00:13:47.470 { 00:13:47.470 "name": null, 00:13:47.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.470 "is_configured": false, 00:13:47.470 "data_offset": 0, 00:13:47.470 "data_size": 65536 00:13:47.470 }, 00:13:47.470 { 00:13:47.470 "name": "BaseBdev3", 00:13:47.470 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:47.470 "is_configured": true, 00:13:47.470 "data_offset": 0, 00:13:47.470 "data_size": 65536 00:13:47.470 }, 00:13:47.470 { 00:13:47.470 "name": "BaseBdev4", 00:13:47.470 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:47.470 "is_configured": true, 00:13:47.470 "data_offset": 0, 00:13:47.470 "data_size": 65536 00:13:47.470 } 00:13:47.470 ] 00:13:47.470 }' 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.470 09:13:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.848 "name": "raid_bdev1", 00:13:48.848 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:48.848 "strip_size_kb": 0, 00:13:48.848 "state": "online", 00:13:48.848 "raid_level": "raid1", 00:13:48.848 "superblock": false, 00:13:48.848 "num_base_bdevs": 4, 00:13:48.848 "num_base_bdevs_discovered": 3, 00:13:48.848 "num_base_bdevs_operational": 3, 00:13:48.848 "process": { 00:13:48.848 "type": "rebuild", 00:13:48.848 "target": "spare", 00:13:48.848 "progress": { 00:13:48.848 "blocks": 49152, 00:13:48.848 "percent": 75 00:13:48.848 } 00:13:48.848 }, 00:13:48.848 "base_bdevs_list": [ 00:13:48.848 { 00:13:48.848 "name": "spare", 00:13:48.848 "uuid": "2fe20d06-2732-5f52-aa63-75a6e2686f6b", 00:13:48.848 "is_configured": true, 00:13:48.848 "data_offset": 0, 00:13:48.848 "data_size": 65536 00:13:48.848 }, 00:13:48.848 { 00:13:48.848 "name": null, 00:13:48.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.848 "is_configured": false, 00:13:48.848 "data_offset": 0, 00:13:48.848 "data_size": 65536 00:13:48.848 }, 00:13:48.848 { 00:13:48.848 "name": "BaseBdev3", 00:13:48.848 "uuid": 
"eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:48.848 "is_configured": true, 00:13:48.848 "data_offset": 0, 00:13:48.848 "data_size": 65536 00:13:48.848 }, 00:13:48.848 { 00:13:48.848 "name": "BaseBdev4", 00:13:48.848 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:48.848 "is_configured": true, 00:13:48.848 "data_offset": 0, 00:13:48.848 "data_size": 65536 00:13:48.848 } 00:13:48.848 ] 00:13:48.848 }' 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.848 09:13:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:49.416 [2024-10-15 09:13:07.055878] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:49.416 [2024-10-15 09:13:07.056084] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:49.416 [2024-10-15 09:13:07.056151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.731 09:13:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.731 "name": "raid_bdev1", 00:13:49.731 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:49.731 "strip_size_kb": 0, 00:13:49.731 "state": "online", 00:13:49.731 "raid_level": "raid1", 00:13:49.731 "superblock": false, 00:13:49.731 "num_base_bdevs": 4, 00:13:49.731 "num_base_bdevs_discovered": 3, 00:13:49.731 "num_base_bdevs_operational": 3, 00:13:49.731 "base_bdevs_list": [ 00:13:49.731 { 00:13:49.731 "name": "spare", 00:13:49.731 "uuid": "2fe20d06-2732-5f52-aa63-75a6e2686f6b", 00:13:49.731 "is_configured": true, 00:13:49.731 "data_offset": 0, 00:13:49.731 "data_size": 65536 00:13:49.731 }, 00:13:49.731 { 00:13:49.731 "name": null, 00:13:49.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.731 "is_configured": false, 00:13:49.731 "data_offset": 0, 00:13:49.731 "data_size": 65536 00:13:49.731 }, 00:13:49.731 { 00:13:49.731 "name": "BaseBdev3", 00:13:49.731 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:49.731 "is_configured": true, 00:13:49.731 "data_offset": 0, 00:13:49.731 "data_size": 65536 00:13:49.731 }, 00:13:49.731 { 00:13:49.731 "name": "BaseBdev4", 00:13:49.731 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:49.731 "is_configured": true, 00:13:49.731 "data_offset": 0, 00:13:49.731 "data_size": 65536 00:13:49.731 } 00:13:49.731 ] 00:13:49.731 }' 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.731 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.732 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.732 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.992 "name": "raid_bdev1", 00:13:49.992 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:49.992 "strip_size_kb": 0, 00:13:49.992 "state": "online", 00:13:49.992 "raid_level": "raid1", 00:13:49.992 "superblock": false, 00:13:49.992 "num_base_bdevs": 4, 00:13:49.992 "num_base_bdevs_discovered": 3, 00:13:49.992 "num_base_bdevs_operational": 3, 00:13:49.992 
"base_bdevs_list": [ 00:13:49.992 { 00:13:49.992 "name": "spare", 00:13:49.992 "uuid": "2fe20d06-2732-5f52-aa63-75a6e2686f6b", 00:13:49.992 "is_configured": true, 00:13:49.992 "data_offset": 0, 00:13:49.992 "data_size": 65536 00:13:49.992 }, 00:13:49.992 { 00:13:49.992 "name": null, 00:13:49.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.992 "is_configured": false, 00:13:49.992 "data_offset": 0, 00:13:49.992 "data_size": 65536 00:13:49.992 }, 00:13:49.992 { 00:13:49.992 "name": "BaseBdev3", 00:13:49.992 "uuid": "eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:49.992 "is_configured": true, 00:13:49.992 "data_offset": 0, 00:13:49.992 "data_size": 65536 00:13:49.992 }, 00:13:49.992 { 00:13:49.992 "name": "BaseBdev4", 00:13:49.992 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:49.992 "is_configured": true, 00:13:49.992 "data_offset": 0, 00:13:49.992 "data_size": 65536 00:13:49.992 } 00:13:49.992 ] 00:13:49.992 }' 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.992 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.993 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.993 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.993 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.993 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.993 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.993 "name": "raid_bdev1", 00:13:49.993 "uuid": "3a4849d5-07df-4468-87fe-71e81a73c939", 00:13:49.993 "strip_size_kb": 0, 00:13:49.993 "state": "online", 00:13:49.993 "raid_level": "raid1", 00:13:49.993 "superblock": false, 00:13:49.993 "num_base_bdevs": 4, 00:13:49.993 "num_base_bdevs_discovered": 3, 00:13:49.993 "num_base_bdevs_operational": 3, 00:13:49.993 "base_bdevs_list": [ 00:13:49.993 { 00:13:49.993 "name": "spare", 00:13:49.993 "uuid": "2fe20d06-2732-5f52-aa63-75a6e2686f6b", 00:13:49.993 "is_configured": true, 00:13:49.993 "data_offset": 0, 00:13:49.993 "data_size": 65536 00:13:49.993 }, 00:13:49.993 { 00:13:49.993 "name": null, 00:13:49.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.993 "is_configured": false, 00:13:49.993 "data_offset": 0, 00:13:49.993 "data_size": 65536 00:13:49.993 }, 00:13:49.993 { 00:13:49.993 "name": "BaseBdev3", 00:13:49.993 "uuid": 
"eaf25cef-289a-5966-a51f-adc43edab43f", 00:13:49.993 "is_configured": true, 00:13:49.993 "data_offset": 0, 00:13:49.993 "data_size": 65536 00:13:49.993 }, 00:13:49.993 { 00:13:49.993 "name": "BaseBdev4", 00:13:49.993 "uuid": "20d4105d-09b7-54a6-93bd-23a732180bc8", 00:13:49.993 "is_configured": true, 00:13:49.993 "data_offset": 0, 00:13:49.993 "data_size": 65536 00:13:49.993 } 00:13:49.993 ] 00:13:49.993 }' 00:13:49.993 09:13:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.993 09:13:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 [2024-10-15 09:13:08.210843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.568 [2024-10-15 09:13:08.210895] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.568 [2024-10-15 09:13:08.210985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.568 [2024-10-15 09:13:08.211069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.568 [2024-10-15 09:13:08.211079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.568 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:50.827 /dev/nbd0 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:50.827 09:13:08 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.827 1+0 records in 00:13:50.827 1+0 records out 00:13:50.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554722 s, 7.4 MB/s 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:50.827 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:51.087 /dev/nbd1 00:13:51.087 
09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.087 1+0 records in 00:13:51.087 1+0 records out 00:13:51.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432546 s, 9.5 MB/s 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:51.087 09:13:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:51.346 09:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:51.346 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.346 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:51.346 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.346 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:51.346 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.346 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.606 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77723 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77723 ']' 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77723 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77723 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:51.866 killing process with pid 77723 00:13:51.866 Received shutdown signal, test time was about 60.000000 seconds 00:13:51.866 00:13:51.866 Latency(us) 00:13:51.866 [2024-10-15T09:13:09.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.866 
[2024-10-15T09:13:09.762Z] =================================================================================================================== 00:13:51.866 [2024-10-15T09:13:09.762Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77723' 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77723 00:13:51.866 [2024-10-15 09:13:09.650474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.866 09:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77723 00:13:52.435 [2024-10-15 09:13:10.191299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:53.820 00:13:53.820 real 0m19.115s 00:13:53.820 user 0m21.244s 00:13:53.820 sys 0m3.673s 00:13:53.820 ************************************ 00:13:53.820 END TEST raid_rebuild_test 00:13:53.820 ************************************ 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.820 09:13:11 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:53.820 09:13:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:53.820 09:13:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.820 09:13:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.820 ************************************ 00:13:53.820 START TEST raid_rebuild_test_sb 00:13:53.820 ************************************ 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78179 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78179 00:13:53.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78179 ']' 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.820 09:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:53.820 [2024-10-15 09:13:11.650776] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:13:53.820 [2024-10-15 09:13:11.651043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78179 ] 00:13:53.820 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:53.820 Zero copy mechanism will not be used. 
00:13:54.080 [2024-10-15 09:13:11.824404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.080 [2024-10-15 09:13:11.968346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.341 [2024-10-15 09:13:12.177656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.341 [2024-10-15 09:13:12.177809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.909 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 BaseBdev1_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 [2024-10-15 09:13:12.589315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:54.910 [2024-10-15 09:13:12.589510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.910 [2024-10-15 09:13:12.589537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:54.910 [2024-10-15 
09:13:12.589550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.910 [2024-10-15 09:13:12.591868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.910 [2024-10-15 09:13:12.591916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.910 BaseBdev1 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 BaseBdev2_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 [2024-10-15 09:13:12.650510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:54.910 [2024-10-15 09:13:12.650694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.910 [2024-10-15 09:13:12.650719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:54.910 [2024-10-15 09:13:12.650731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.910 [2024-10-15 09:13:12.652985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:54.910 [2024-10-15 09:13:12.653035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:54.910 BaseBdev2 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 BaseBdev3_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 [2024-10-15 09:13:12.727441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:54.910 [2024-10-15 09:13:12.727529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.910 [2024-10-15 09:13:12.727551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:54.910 [2024-10-15 09:13:12.727564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.910 [2024-10-15 09:13:12.729848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.910 [2024-10-15 09:13:12.729894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:54.910 BaseBdev3 00:13:54.910 09:13:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 BaseBdev4_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 [2024-10-15 09:13:12.786206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:54.910 [2024-10-15 09:13:12.786292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.910 [2024-10-15 09:13:12.786320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:54.910 [2024-10-15 09:13:12.786333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.910 [2024-10-15 09:13:12.788503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.910 [2024-10-15 09:13:12.788547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:54.910 BaseBdev4 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.910 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.169 spare_malloc 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.169 spare_delay 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.169 [2024-10-15 09:13:12.856802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:55.169 [2024-10-15 09:13:12.856891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.169 [2024-10-15 09:13:12.856918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:55.169 [2024-10-15 09:13:12.856929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.169 [2024-10-15 09:13:12.859449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.169 [2024-10-15 09:13:12.859516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:55.169 spare 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.169 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.169 [2024-10-15 09:13:12.868843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.169 [2024-10-15 09:13:12.871031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.169 [2024-10-15 09:13:12.871116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.169 [2024-10-15 09:13:12.871179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.169 [2024-10-15 09:13:12.871407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:55.169 [2024-10-15 09:13:12.871427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:55.170 [2024-10-15 09:13:12.871753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:55.170 [2024-10-15 09:13:12.872089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:55.170 [2024-10-15 09:13:12.872109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:55.170 [2024-10-15 09:13:12.872288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:55.170 09:13:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.170 "name": "raid_bdev1", 00:13:55.170 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:13:55.170 "strip_size_kb": 0, 00:13:55.170 "state": "online", 00:13:55.170 "raid_level": "raid1", 00:13:55.170 "superblock": true, 00:13:55.170 "num_base_bdevs": 4, 00:13:55.170 "num_base_bdevs_discovered": 4, 00:13:55.170 "num_base_bdevs_operational": 4, 00:13:55.170 "base_bdevs_list": [ 00:13:55.170 { 
00:13:55.170 "name": "BaseBdev1", 00:13:55.170 "uuid": "75d84e2f-835a-59c2-9969-75f6a340d5f2", 00:13:55.170 "is_configured": true, 00:13:55.170 "data_offset": 2048, 00:13:55.170 "data_size": 63488 00:13:55.170 }, 00:13:55.170 { 00:13:55.170 "name": "BaseBdev2", 00:13:55.170 "uuid": "d73d39e8-f370-5a1d-a59d-336fe05876ac", 00:13:55.170 "is_configured": true, 00:13:55.170 "data_offset": 2048, 00:13:55.170 "data_size": 63488 00:13:55.170 }, 00:13:55.170 { 00:13:55.170 "name": "BaseBdev3", 00:13:55.170 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:13:55.170 "is_configured": true, 00:13:55.170 "data_offset": 2048, 00:13:55.170 "data_size": 63488 00:13:55.170 }, 00:13:55.170 { 00:13:55.170 "name": "BaseBdev4", 00:13:55.170 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:13:55.170 "is_configured": true, 00:13:55.170 "data_offset": 2048, 00:13:55.170 "data_size": 63488 00:13:55.170 } 00:13:55.170 ] 00:13:55.170 }' 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.170 09:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.430 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.430 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:55.430 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.430 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.430 [2024-10-15 09:13:13.316393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.690 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:55.951 
[2024-10-15 09:13:13.627587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:55.951 /dev/nbd0 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.951 1+0 records in 00:13:55.951 1+0 records out 00:13:55.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661872 s, 6.2 MB/s 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 
0 ']' 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:55.951 09:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:02.573 63488+0 records in 00:14:02.573 63488+0 records out 00:14:02.573 32505856 bytes (33 MB, 31 MiB) copied, 6.44213 s, 5.0 MB/s 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.573 [2024-10-15 09:13:20.371428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.573 [2024-10-15 09:13:20.412093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.573 09:13:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.573 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.890 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.890 "name": "raid_bdev1", 00:14:02.890 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:02.890 "strip_size_kb": 0, 00:14:02.890 "state": "online", 00:14:02.890 "raid_level": "raid1", 00:14:02.890 "superblock": true, 00:14:02.890 "num_base_bdevs": 4, 00:14:02.890 "num_base_bdevs_discovered": 3, 00:14:02.890 "num_base_bdevs_operational": 3, 00:14:02.890 "base_bdevs_list": [ 00:14:02.890 { 00:14:02.890 "name": null, 00:14:02.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.890 "is_configured": false, 00:14:02.890 "data_offset": 0, 00:14:02.890 "data_size": 63488 00:14:02.890 }, 00:14:02.890 { 00:14:02.890 "name": "BaseBdev2", 00:14:02.890 "uuid": "d73d39e8-f370-5a1d-a59d-336fe05876ac", 00:14:02.890 "is_configured": true, 00:14:02.890 "data_offset": 2048, 00:14:02.890 "data_size": 63488 00:14:02.890 }, 00:14:02.890 { 00:14:02.890 "name": "BaseBdev3", 00:14:02.890 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:02.890 "is_configured": true, 00:14:02.890 "data_offset": 2048, 00:14:02.890 "data_size": 63488 00:14:02.890 }, 00:14:02.890 { 00:14:02.890 "name": "BaseBdev4", 00:14:02.890 "uuid": 
"8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:02.890 "is_configured": true, 00:14:02.890 "data_offset": 2048, 00:14:02.890 "data_size": 63488 00:14:02.890 } 00:14:02.890 ] 00:14:02.890 }' 00:14:02.890 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.890 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.149 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.149 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.149 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.149 [2024-10-15 09:13:20.891319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.149 [2024-10-15 09:13:20.907952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:03.149 09:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.149 09:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:03.149 [2024-10-15 09:13:20.910043] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.089 09:13:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.089 "name": "raid_bdev1", 00:14:04.089 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:04.089 "strip_size_kb": 0, 00:14:04.089 "state": "online", 00:14:04.089 "raid_level": "raid1", 00:14:04.089 "superblock": true, 00:14:04.089 "num_base_bdevs": 4, 00:14:04.089 "num_base_bdevs_discovered": 4, 00:14:04.089 "num_base_bdevs_operational": 4, 00:14:04.089 "process": { 00:14:04.089 "type": "rebuild", 00:14:04.089 "target": "spare", 00:14:04.089 "progress": { 00:14:04.089 "blocks": 20480, 00:14:04.089 "percent": 32 00:14:04.089 } 00:14:04.089 }, 00:14:04.089 "base_bdevs_list": [ 00:14:04.089 { 00:14:04.089 "name": "spare", 00:14:04.089 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:04.089 "is_configured": true, 00:14:04.089 "data_offset": 2048, 00:14:04.089 "data_size": 63488 00:14:04.089 }, 00:14:04.089 { 00:14:04.089 "name": "BaseBdev2", 00:14:04.089 "uuid": "d73d39e8-f370-5a1d-a59d-336fe05876ac", 00:14:04.089 "is_configured": true, 00:14:04.089 "data_offset": 2048, 00:14:04.089 "data_size": 63488 00:14:04.089 }, 00:14:04.089 { 00:14:04.089 "name": "BaseBdev3", 00:14:04.089 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:04.089 "is_configured": true, 00:14:04.089 "data_offset": 2048, 00:14:04.089 "data_size": 63488 00:14:04.089 }, 00:14:04.089 { 00:14:04.089 "name": "BaseBdev4", 00:14:04.089 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:04.089 "is_configured": true, 00:14:04.089 "data_offset": 2048, 00:14:04.089 "data_size": 63488 
00:14:04.089 } 00:14:04.089 ] 00:14:04.089 }' 00:14:04.089 09:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.348 [2024-10-15 09:13:22.065819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.348 [2024-10-15 09:13:22.116046] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:04.348 [2024-10-15 09:13:22.116235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.348 [2024-10-15 09:13:22.116256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.348 [2024-10-15 09:13:22.116267] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.348 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.348 "name": "raid_bdev1", 00:14:04.348 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:04.348 "strip_size_kb": 0, 00:14:04.348 "state": "online", 00:14:04.348 "raid_level": "raid1", 00:14:04.348 "superblock": true, 00:14:04.348 "num_base_bdevs": 4, 00:14:04.348 "num_base_bdevs_discovered": 3, 00:14:04.348 "num_base_bdevs_operational": 3, 00:14:04.348 "base_bdevs_list": [ 00:14:04.348 { 00:14:04.348 "name": null, 00:14:04.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.348 "is_configured": false, 00:14:04.348 "data_offset": 0, 00:14:04.348 "data_size": 63488 00:14:04.348 }, 00:14:04.348 { 00:14:04.348 "name": "BaseBdev2", 00:14:04.348 "uuid": 
"d73d39e8-f370-5a1d-a59d-336fe05876ac", 00:14:04.348 "is_configured": true, 00:14:04.348 "data_offset": 2048, 00:14:04.348 "data_size": 63488 00:14:04.348 }, 00:14:04.348 { 00:14:04.348 "name": "BaseBdev3", 00:14:04.348 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:04.348 "is_configured": true, 00:14:04.348 "data_offset": 2048, 00:14:04.348 "data_size": 63488 00:14:04.348 }, 00:14:04.348 { 00:14:04.348 "name": "BaseBdev4", 00:14:04.348 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:04.348 "is_configured": true, 00:14:04.348 "data_offset": 2048, 00:14:04.349 "data_size": 63488 00:14:04.349 } 00:14:04.349 ] 00:14:04.349 }' 00:14:04.349 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.349 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.918 09:13:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.918 "name": "raid_bdev1", 00:14:04.918 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:04.918 "strip_size_kb": 0, 00:14:04.918 "state": "online", 00:14:04.918 "raid_level": "raid1", 00:14:04.918 "superblock": true, 00:14:04.918 "num_base_bdevs": 4, 00:14:04.918 "num_base_bdevs_discovered": 3, 00:14:04.918 "num_base_bdevs_operational": 3, 00:14:04.918 "base_bdevs_list": [ 00:14:04.918 { 00:14:04.918 "name": null, 00:14:04.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.918 "is_configured": false, 00:14:04.918 "data_offset": 0, 00:14:04.918 "data_size": 63488 00:14:04.918 }, 00:14:04.918 { 00:14:04.918 "name": "BaseBdev2", 00:14:04.918 "uuid": "d73d39e8-f370-5a1d-a59d-336fe05876ac", 00:14:04.918 "is_configured": true, 00:14:04.918 "data_offset": 2048, 00:14:04.918 "data_size": 63488 00:14:04.918 }, 00:14:04.918 { 00:14:04.918 "name": "BaseBdev3", 00:14:04.918 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:04.918 "is_configured": true, 00:14:04.918 "data_offset": 2048, 00:14:04.918 "data_size": 63488 00:14:04.918 }, 00:14:04.918 { 00:14:04.918 "name": "BaseBdev4", 00:14:04.919 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:04.919 "is_configured": true, 00:14:04.919 "data_offset": 2048, 00:14:04.919 "data_size": 63488 00:14:04.919 } 00:14:04.919 ] 00:14:04.919 }' 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.919 [2024-10-15 09:13:22.755389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.919 [2024-10-15 09:13:22.770719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.919 09:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:04.919 [2024-10-15 09:13:22.772682] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.300 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.300 "name": "raid_bdev1", 00:14:06.300 "uuid": 
"be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:06.300 "strip_size_kb": 0, 00:14:06.300 "state": "online", 00:14:06.300 "raid_level": "raid1", 00:14:06.300 "superblock": true, 00:14:06.300 "num_base_bdevs": 4, 00:14:06.300 "num_base_bdevs_discovered": 4, 00:14:06.300 "num_base_bdevs_operational": 4, 00:14:06.300 "process": { 00:14:06.300 "type": "rebuild", 00:14:06.300 "target": "spare", 00:14:06.300 "progress": { 00:14:06.300 "blocks": 20480, 00:14:06.300 "percent": 32 00:14:06.300 } 00:14:06.300 }, 00:14:06.300 "base_bdevs_list": [ 00:14:06.300 { 00:14:06.300 "name": "spare", 00:14:06.300 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:06.300 "is_configured": true, 00:14:06.300 "data_offset": 2048, 00:14:06.300 "data_size": 63488 00:14:06.300 }, 00:14:06.300 { 00:14:06.300 "name": "BaseBdev2", 00:14:06.300 "uuid": "d73d39e8-f370-5a1d-a59d-336fe05876ac", 00:14:06.300 "is_configured": true, 00:14:06.300 "data_offset": 2048, 00:14:06.300 "data_size": 63488 00:14:06.300 }, 00:14:06.300 { 00:14:06.300 "name": "BaseBdev3", 00:14:06.300 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:06.300 "is_configured": true, 00:14:06.300 "data_offset": 2048, 00:14:06.300 "data_size": 63488 00:14:06.300 }, 00:14:06.300 { 00:14:06.300 "name": "BaseBdev4", 00:14:06.300 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:06.300 "is_configured": true, 00:14:06.300 "data_offset": 2048, 00:14:06.300 "data_size": 63488 00:14:06.300 } 00:14:06.300 ] 00:14:06.300 }' 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:06.301 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.301 09:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.301 [2024-10-15 09:13:23.940338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:06.301 [2024-10-15 09:13:24.078749] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.301 "name": "raid_bdev1", 00:14:06.301 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:06.301 "strip_size_kb": 0, 00:14:06.301 "state": "online", 00:14:06.301 "raid_level": "raid1", 00:14:06.301 "superblock": true, 00:14:06.301 "num_base_bdevs": 4, 00:14:06.301 "num_base_bdevs_discovered": 3, 00:14:06.301 "num_base_bdevs_operational": 3, 00:14:06.301 "process": { 00:14:06.301 "type": "rebuild", 00:14:06.301 "target": "spare", 00:14:06.301 "progress": { 00:14:06.301 "blocks": 24576, 00:14:06.301 "percent": 38 00:14:06.301 } 00:14:06.301 }, 00:14:06.301 "base_bdevs_list": [ 00:14:06.301 { 00:14:06.301 "name": "spare", 00:14:06.301 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:06.301 "is_configured": true, 00:14:06.301 "data_offset": 2048, 00:14:06.301 "data_size": 63488 00:14:06.301 }, 00:14:06.301 { 00:14:06.301 "name": null, 00:14:06.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.301 "is_configured": false, 00:14:06.301 "data_offset": 0, 00:14:06.301 "data_size": 63488 00:14:06.301 }, 00:14:06.301 { 00:14:06.301 "name": "BaseBdev3", 00:14:06.301 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:06.301 "is_configured": true, 00:14:06.301 "data_offset": 2048, 00:14:06.301 "data_size": 63488 00:14:06.301 }, 00:14:06.301 { 00:14:06.301 "name": 
"BaseBdev4", 00:14:06.301 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:06.301 "is_configured": true, 00:14:06.301 "data_offset": 2048, 00:14:06.301 "data_size": 63488 00:14:06.301 } 00:14:06.301 ] 00:14:06.301 }' 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.301 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.562 "name": "raid_bdev1", 00:14:06.562 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:06.562 "strip_size_kb": 0, 00:14:06.562 "state": "online", 00:14:06.562 "raid_level": "raid1", 00:14:06.562 "superblock": true, 00:14:06.562 "num_base_bdevs": 4, 00:14:06.562 "num_base_bdevs_discovered": 3, 00:14:06.562 "num_base_bdevs_operational": 3, 00:14:06.562 "process": { 00:14:06.562 "type": "rebuild", 00:14:06.562 "target": "spare", 00:14:06.562 "progress": { 00:14:06.562 "blocks": 26624, 00:14:06.562 "percent": 41 00:14:06.562 } 00:14:06.562 }, 00:14:06.562 "base_bdevs_list": [ 00:14:06.562 { 00:14:06.562 "name": "spare", 00:14:06.562 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:06.562 "is_configured": true, 00:14:06.562 "data_offset": 2048, 00:14:06.562 "data_size": 63488 00:14:06.562 }, 00:14:06.562 { 00:14:06.562 "name": null, 00:14:06.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.562 "is_configured": false, 00:14:06.562 "data_offset": 0, 00:14:06.562 "data_size": 63488 00:14:06.562 }, 00:14:06.562 { 00:14:06.562 "name": "BaseBdev3", 00:14:06.562 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:06.562 "is_configured": true, 00:14:06.562 "data_offset": 2048, 00:14:06.562 "data_size": 63488 00:14:06.562 }, 00:14:06.562 { 00:14:06.562 "name": "BaseBdev4", 00:14:06.562 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:06.562 "is_configured": true, 00:14:06.562 "data_offset": 2048, 00:14:06.562 "data_size": 63488 00:14:06.562 } 00:14:06.562 ] 00:14:06.562 }' 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.562 09:13:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.562 09:13:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.503 09:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.763 09:13:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.763 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.763 "name": "raid_bdev1", 00:14:07.763 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:07.763 "strip_size_kb": 0, 00:14:07.763 "state": "online", 00:14:07.763 "raid_level": "raid1", 00:14:07.763 "superblock": true, 00:14:07.763 "num_base_bdevs": 4, 00:14:07.763 "num_base_bdevs_discovered": 3, 00:14:07.763 "num_base_bdevs_operational": 3, 00:14:07.763 "process": { 00:14:07.763 "type": "rebuild", 00:14:07.763 "target": "spare", 00:14:07.763 "progress": { 00:14:07.763 "blocks": 
51200, 00:14:07.763 "percent": 80 00:14:07.763 } 00:14:07.763 }, 00:14:07.763 "base_bdevs_list": [ 00:14:07.763 { 00:14:07.763 "name": "spare", 00:14:07.763 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:07.763 "is_configured": true, 00:14:07.763 "data_offset": 2048, 00:14:07.763 "data_size": 63488 00:14:07.763 }, 00:14:07.763 { 00:14:07.763 "name": null, 00:14:07.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.763 "is_configured": false, 00:14:07.763 "data_offset": 0, 00:14:07.763 "data_size": 63488 00:14:07.763 }, 00:14:07.763 { 00:14:07.763 "name": "BaseBdev3", 00:14:07.763 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:07.763 "is_configured": true, 00:14:07.763 "data_offset": 2048, 00:14:07.763 "data_size": 63488 00:14:07.763 }, 00:14:07.763 { 00:14:07.763 "name": "BaseBdev4", 00:14:07.763 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:07.763 "is_configured": true, 00:14:07.763 "data_offset": 2048, 00:14:07.763 "data_size": 63488 00:14:07.763 } 00:14:07.763 ] 00:14:07.763 }' 00:14:07.763 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.763 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.763 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.763 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.763 09:13:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.332 [2024-10-15 09:13:25.988533] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:08.332 [2024-10-15 09:13:25.988632] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:08.332 [2024-10-15 09:13:25.988809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.899 09:13:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.899 "name": "raid_bdev1", 00:14:08.899 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:08.899 "strip_size_kb": 0, 00:14:08.899 "state": "online", 00:14:08.899 "raid_level": "raid1", 00:14:08.899 "superblock": true, 00:14:08.899 "num_base_bdevs": 4, 00:14:08.899 "num_base_bdevs_discovered": 3, 00:14:08.899 "num_base_bdevs_operational": 3, 00:14:08.899 "base_bdevs_list": [ 00:14:08.899 { 00:14:08.899 "name": "spare", 00:14:08.899 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:08.899 "is_configured": true, 00:14:08.899 "data_offset": 2048, 00:14:08.899 "data_size": 63488 00:14:08.899 }, 00:14:08.899 { 00:14:08.899 "name": null, 00:14:08.899 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:08.899 "is_configured": false, 00:14:08.899 "data_offset": 0, 00:14:08.899 "data_size": 63488 00:14:08.899 }, 00:14:08.899 { 00:14:08.899 "name": "BaseBdev3", 00:14:08.899 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:08.899 "is_configured": true, 00:14:08.899 "data_offset": 2048, 00:14:08.899 "data_size": 63488 00:14:08.899 }, 00:14:08.899 { 00:14:08.899 "name": "BaseBdev4", 00:14:08.899 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:08.899 "is_configured": true, 00:14:08.899 "data_offset": 2048, 00:14:08.899 "data_size": 63488 00:14:08.899 } 00:14:08.899 ] 00:14:08.899 }' 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.899 09:13:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.899 "name": "raid_bdev1", 00:14:08.899 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:08.899 "strip_size_kb": 0, 00:14:08.899 "state": "online", 00:14:08.899 "raid_level": "raid1", 00:14:08.899 "superblock": true, 00:14:08.899 "num_base_bdevs": 4, 00:14:08.899 "num_base_bdevs_discovered": 3, 00:14:08.899 "num_base_bdevs_operational": 3, 00:14:08.899 "base_bdevs_list": [ 00:14:08.899 { 00:14:08.899 "name": "spare", 00:14:08.899 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:08.899 "is_configured": true, 00:14:08.899 "data_offset": 2048, 00:14:08.899 "data_size": 63488 00:14:08.899 }, 00:14:08.899 { 00:14:08.899 "name": null, 00:14:08.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.899 "is_configured": false, 00:14:08.899 "data_offset": 0, 00:14:08.899 "data_size": 63488 00:14:08.899 }, 00:14:08.899 { 00:14:08.899 "name": "BaseBdev3", 00:14:08.899 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:08.899 "is_configured": true, 00:14:08.899 "data_offset": 2048, 00:14:08.899 "data_size": 63488 00:14:08.899 }, 00:14:08.899 { 00:14:08.899 "name": "BaseBdev4", 00:14:08.899 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:08.899 "is_configured": true, 00:14:08.899 "data_offset": 2048, 00:14:08.899 "data_size": 63488 00:14:08.899 } 00:14:08.899 ] 00:14:08.899 }' 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.899 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.157 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.157 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.157 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.157 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.157 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.157 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.157 "name": "raid_bdev1", 00:14:09.157 "uuid": 
"be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:09.157 "strip_size_kb": 0, 00:14:09.157 "state": "online", 00:14:09.157 "raid_level": "raid1", 00:14:09.157 "superblock": true, 00:14:09.157 "num_base_bdevs": 4, 00:14:09.157 "num_base_bdevs_discovered": 3, 00:14:09.157 "num_base_bdevs_operational": 3, 00:14:09.157 "base_bdevs_list": [ 00:14:09.157 { 00:14:09.157 "name": "spare", 00:14:09.157 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:09.157 "is_configured": true, 00:14:09.157 "data_offset": 2048, 00:14:09.157 "data_size": 63488 00:14:09.157 }, 00:14:09.157 { 00:14:09.157 "name": null, 00:14:09.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.157 "is_configured": false, 00:14:09.157 "data_offset": 0, 00:14:09.157 "data_size": 63488 00:14:09.157 }, 00:14:09.157 { 00:14:09.157 "name": "BaseBdev3", 00:14:09.157 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:09.157 "is_configured": true, 00:14:09.157 "data_offset": 2048, 00:14:09.157 "data_size": 63488 00:14:09.157 }, 00:14:09.157 { 00:14:09.157 "name": "BaseBdev4", 00:14:09.157 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:09.157 "is_configured": true, 00:14:09.157 "data_offset": 2048, 00:14:09.157 "data_size": 63488 00:14:09.157 } 00:14:09.157 ] 00:14:09.157 }' 00:14:09.157 09:13:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.157 09:13:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.723 [2024-10-15 09:13:27.329133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.723 [2024-10-15 09:13:27.329282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:14:09.723 [2024-10-15 09:13:27.329413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.723 [2024-10-15 09:13:27.329547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.723 [2024-10-15 09:13:27.329602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:09.723 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:09.981 /dev/nbd0 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.981 1+0 records in 00:14:09.981 1+0 records out 00:14:09.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566223 s, 7.2 MB/s 00:14:09.981 09:13:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:09.981 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:10.261 /dev/nbd1 00:14:10.261 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.262 1+0 records in 00:14:10.262 1+0 records out 00:14:10.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734234 s, 5.6 MB/s 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.262 09:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:10.525 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:10.525 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.525 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:10.525 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:10.525 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:10.525 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.525 09:13:28 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:10.525 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.784 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.043 [2024-10-15 09:13:28.685439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.043 [2024-10-15 09:13:28.685539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.043 [2024-10-15 09:13:28.685567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:11.043 [2024-10-15 09:13:28.685578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.043 [2024-10-15 09:13:28.688134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.043 [2024-10-15 09:13:28.688300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.043 [2024-10-15 09:13:28.688424] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:11.043 [2024-10-15 09:13:28.688493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.043 [2024-10-15 09:13:28.688657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:11.043 [2024-10-15 09:13:28.688791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:11.043 spare 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.043 [2024-10-15 09:13:28.788729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:11.043 [2024-10-15 09:13:28.788897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:11.043 [2024-10-15 09:13:28.789417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:11.043 [2024-10-15 09:13:28.789735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:11.043 [2024-10-15 09:13:28.789791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:11.043 [2024-10-15 09:13:28.790095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.043 "name": "raid_bdev1", 00:14:11.043 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:11.043 "strip_size_kb": 0, 00:14:11.043 "state": "online", 00:14:11.043 "raid_level": "raid1", 00:14:11.043 "superblock": true, 00:14:11.043 "num_base_bdevs": 4, 00:14:11.043 "num_base_bdevs_discovered": 3, 00:14:11.043 "num_base_bdevs_operational": 3, 00:14:11.043 "base_bdevs_list": [ 00:14:11.043 { 00:14:11.043 "name": "spare", 00:14:11.043 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:11.043 "is_configured": true, 00:14:11.043 "data_offset": 2048, 00:14:11.043 "data_size": 63488 00:14:11.043 }, 00:14:11.043 { 00:14:11.043 "name": null, 00:14:11.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.043 "is_configured": false, 00:14:11.043 "data_offset": 2048, 
00:14:11.043 "data_size": 63488 00:14:11.043 }, 00:14:11.043 { 00:14:11.043 "name": "BaseBdev3", 00:14:11.043 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:11.043 "is_configured": true, 00:14:11.043 "data_offset": 2048, 00:14:11.043 "data_size": 63488 00:14:11.043 }, 00:14:11.043 { 00:14:11.043 "name": "BaseBdev4", 00:14:11.043 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:11.043 "is_configured": true, 00:14:11.043 "data_offset": 2048, 00:14:11.043 "data_size": 63488 00:14:11.043 } 00:14:11.043 ] 00:14:11.043 }' 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.043 09:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.610 "name": "raid_bdev1", 00:14:11.610 "uuid": 
"be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:11.610 "strip_size_kb": 0, 00:14:11.610 "state": "online", 00:14:11.610 "raid_level": "raid1", 00:14:11.610 "superblock": true, 00:14:11.610 "num_base_bdevs": 4, 00:14:11.610 "num_base_bdevs_discovered": 3, 00:14:11.610 "num_base_bdevs_operational": 3, 00:14:11.610 "base_bdevs_list": [ 00:14:11.610 { 00:14:11.610 "name": "spare", 00:14:11.610 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:11.610 "is_configured": true, 00:14:11.610 "data_offset": 2048, 00:14:11.610 "data_size": 63488 00:14:11.610 }, 00:14:11.610 { 00:14:11.610 "name": null, 00:14:11.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.610 "is_configured": false, 00:14:11.610 "data_offset": 2048, 00:14:11.610 "data_size": 63488 00:14:11.610 }, 00:14:11.610 { 00:14:11.610 "name": "BaseBdev3", 00:14:11.610 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:11.610 "is_configured": true, 00:14:11.610 "data_offset": 2048, 00:14:11.610 "data_size": 63488 00:14:11.610 }, 00:14:11.610 { 00:14:11.610 "name": "BaseBdev4", 00:14:11.610 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:11.610 "is_configured": true, 00:14:11.610 "data_offset": 2048, 00:14:11.610 "data_size": 63488 00:14:11.610 } 00:14:11.610 ] 00:14:11.610 }' 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:11.610 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.868 [2024-10-15 09:13:29.537380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.868 "name": "raid_bdev1", 00:14:11.868 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:11.868 "strip_size_kb": 0, 00:14:11.868 "state": "online", 00:14:11.868 "raid_level": "raid1", 00:14:11.868 "superblock": true, 00:14:11.868 "num_base_bdevs": 4, 00:14:11.868 "num_base_bdevs_discovered": 2, 00:14:11.868 "num_base_bdevs_operational": 2, 00:14:11.868 "base_bdevs_list": [ 00:14:11.868 { 00:14:11.868 "name": null, 00:14:11.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.868 "is_configured": false, 00:14:11.868 "data_offset": 0, 00:14:11.868 "data_size": 63488 00:14:11.868 }, 00:14:11.868 { 00:14:11.868 "name": null, 00:14:11.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.868 "is_configured": false, 00:14:11.868 "data_offset": 2048, 00:14:11.868 "data_size": 63488 00:14:11.868 }, 00:14:11.868 { 00:14:11.868 "name": "BaseBdev3", 00:14:11.868 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:11.868 "is_configured": true, 00:14:11.868 "data_offset": 2048, 00:14:11.868 "data_size": 63488 00:14:11.868 }, 00:14:11.868 { 00:14:11.868 "name": "BaseBdev4", 00:14:11.868 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:11.868 "is_configured": true, 00:14:11.868 "data_offset": 2048, 00:14:11.868 "data_size": 63488 00:14:11.868 } 00:14:11.868 ] 00:14:11.868 }' 00:14:11.868 09:13:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.868 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.126 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:12.126 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.126 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.126 [2024-10-15 09:13:29.969341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.126 [2024-10-15 09:13:29.969712] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:12.126 [2024-10-15 09:13:29.969783] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:12.126 [2024-10-15 09:13:29.969880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.126 [2024-10-15 09:13:29.988266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:12.126 09:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.126 09:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:12.126 [2024-10-15 09:13:29.990664] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.500 09:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.500 09:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.500 09:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.500 09:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:14:13.500 09:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.500 09:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.500 09:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.500 "name": "raid_bdev1", 00:14:13.500 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:13.500 "strip_size_kb": 0, 00:14:13.500 "state": "online", 00:14:13.500 "raid_level": "raid1", 00:14:13.500 "superblock": true, 00:14:13.500 "num_base_bdevs": 4, 00:14:13.500 "num_base_bdevs_discovered": 3, 00:14:13.500 "num_base_bdevs_operational": 3, 00:14:13.500 "process": { 00:14:13.500 "type": "rebuild", 00:14:13.500 "target": "spare", 00:14:13.500 "progress": { 00:14:13.500 "blocks": 20480, 00:14:13.500 "percent": 32 00:14:13.500 } 00:14:13.500 }, 00:14:13.500 "base_bdevs_list": [ 00:14:13.500 { 00:14:13.500 "name": "spare", 00:14:13.500 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:13.500 "is_configured": true, 00:14:13.500 "data_offset": 2048, 00:14:13.500 "data_size": 63488 00:14:13.500 }, 00:14:13.500 { 00:14:13.500 "name": null, 00:14:13.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.500 "is_configured": false, 00:14:13.500 "data_offset": 2048, 00:14:13.500 "data_size": 63488 00:14:13.500 }, 00:14:13.500 { 00:14:13.500 "name": "BaseBdev3", 00:14:13.500 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:13.500 "is_configured": true, 00:14:13.500 "data_offset": 2048, 00:14:13.500 "data_size": 
63488 00:14:13.500 }, 00:14:13.500 { 00:14:13.500 "name": "BaseBdev4", 00:14:13.500 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:13.500 "is_configured": true, 00:14:13.500 "data_offset": 2048, 00:14:13.500 "data_size": 63488 00:14:13.500 } 00:14:13.500 ] 00:14:13.500 }' 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.500 [2024-10-15 09:13:31.157827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.500 [2024-10-15 09:13:31.197547] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:13.500 [2024-10-15 09:13:31.197643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.500 [2024-10-15 09:13:31.197668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.500 [2024-10-15 09:13:31.197676] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.500 "name": "raid_bdev1", 00:14:13.500 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:13.500 "strip_size_kb": 0, 00:14:13.500 "state": "online", 00:14:13.500 "raid_level": "raid1", 00:14:13.500 "superblock": true, 00:14:13.500 "num_base_bdevs": 4, 00:14:13.500 "num_base_bdevs_discovered": 2, 00:14:13.500 "num_base_bdevs_operational": 2, 00:14:13.500 "base_bdevs_list": [ 00:14:13.500 { 00:14:13.500 "name": null, 
00:14:13.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.500 "is_configured": false, 00:14:13.500 "data_offset": 0, 00:14:13.500 "data_size": 63488 00:14:13.500 }, 00:14:13.500 { 00:14:13.500 "name": null, 00:14:13.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.500 "is_configured": false, 00:14:13.500 "data_offset": 2048, 00:14:13.500 "data_size": 63488 00:14:13.500 }, 00:14:13.500 { 00:14:13.500 "name": "BaseBdev3", 00:14:13.500 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:13.500 "is_configured": true, 00:14:13.500 "data_offset": 2048, 00:14:13.500 "data_size": 63488 00:14:13.500 }, 00:14:13.500 { 00:14:13.500 "name": "BaseBdev4", 00:14:13.500 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:13.500 "is_configured": true, 00:14:13.500 "data_offset": 2048, 00:14:13.500 "data_size": 63488 00:14:13.500 } 00:14:13.500 ] 00:14:13.500 }' 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.500 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.772 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:13.772 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.772 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.772 [2024-10-15 09:13:31.662132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:13.772 [2024-10-15 09:13:31.662337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.772 [2024-10-15 09:13:31.662388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:13.772 [2024-10-15 09:13:31.662442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.772 [2024-10-15 09:13:31.663111] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:13.772 [2024-10-15 09:13:31.663144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:13.772 [2024-10-15 09:13:31.663278] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:13.772 [2024-10-15 09:13:31.663293] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:13.772 [2024-10-15 09:13:31.663311] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:13.772 [2024-10-15 09:13:31.663338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.029 [2024-10-15 09:13:31.680298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:14.029 spare 00:14:14.029 09:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.029 09:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:14.029 [2024-10-15 09:13:31.682633] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.965 
09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.965 "name": "raid_bdev1", 00:14:14.965 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:14.965 "strip_size_kb": 0, 00:14:14.965 "state": "online", 00:14:14.965 "raid_level": "raid1", 00:14:14.965 "superblock": true, 00:14:14.965 "num_base_bdevs": 4, 00:14:14.965 "num_base_bdevs_discovered": 3, 00:14:14.965 "num_base_bdevs_operational": 3, 00:14:14.965 "process": { 00:14:14.965 "type": "rebuild", 00:14:14.965 "target": "spare", 00:14:14.965 "progress": { 00:14:14.965 "blocks": 20480, 00:14:14.965 "percent": 32 00:14:14.965 } 00:14:14.965 }, 00:14:14.965 "base_bdevs_list": [ 00:14:14.965 { 00:14:14.965 "name": "spare", 00:14:14.965 "uuid": "07ce5a1d-28bb-5c50-832a-9a91fdeb6815", 00:14:14.965 "is_configured": true, 00:14:14.965 "data_offset": 2048, 00:14:14.965 "data_size": 63488 00:14:14.965 }, 00:14:14.965 { 00:14:14.965 "name": null, 00:14:14.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.965 "is_configured": false, 00:14:14.965 "data_offset": 2048, 00:14:14.965 "data_size": 63488 00:14:14.965 }, 00:14:14.965 { 00:14:14.965 "name": "BaseBdev3", 00:14:14.965 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:14.965 "is_configured": true, 00:14:14.965 "data_offset": 2048, 00:14:14.965 "data_size": 63488 00:14:14.965 }, 00:14:14.965 { 00:14:14.965 "name": "BaseBdev4", 00:14:14.965 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:14.965 "is_configured": true, 00:14:14.965 "data_offset": 2048, 00:14:14.965 "data_size": 63488 00:14:14.965 } 00:14:14.965 ] 00:14:14.965 }' 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.965 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.965 [2024-10-15 09:13:32.854041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.225 [2024-10-15 09:13:32.889211] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:15.225 [2024-10-15 09:13:32.889311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.225 [2024-10-15 09:13:32.889331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.225 [2024-10-15 09:13:32.889342] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.225 "name": "raid_bdev1", 00:14:15.225 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:15.225 "strip_size_kb": 0, 00:14:15.225 "state": "online", 00:14:15.225 "raid_level": "raid1", 00:14:15.225 "superblock": true, 00:14:15.225 "num_base_bdevs": 4, 00:14:15.225 "num_base_bdevs_discovered": 2, 00:14:15.225 "num_base_bdevs_operational": 2, 00:14:15.225 "base_bdevs_list": [ 00:14:15.225 { 00:14:15.225 "name": null, 00:14:15.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.225 "is_configured": false, 00:14:15.225 "data_offset": 0, 00:14:15.225 "data_size": 63488 00:14:15.225 }, 00:14:15.225 { 00:14:15.225 "name": null, 00:14:15.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.225 "is_configured": false, 00:14:15.225 "data_offset": 2048, 
00:14:15.225 "data_size": 63488 00:14:15.225 }, 00:14:15.225 { 00:14:15.225 "name": "BaseBdev3", 00:14:15.225 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:15.225 "is_configured": true, 00:14:15.225 "data_offset": 2048, 00:14:15.225 "data_size": 63488 00:14:15.225 }, 00:14:15.225 { 00:14:15.225 "name": "BaseBdev4", 00:14:15.225 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:15.225 "is_configured": true, 00:14:15.225 "data_offset": 2048, 00:14:15.225 "data_size": 63488 00:14:15.225 } 00:14:15.225 ] 00:14:15.225 }' 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.225 09:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.485 09:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.744 09:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.744 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.744 "name": "raid_bdev1", 00:14:15.744 "uuid": 
"be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:15.744 "strip_size_kb": 0, 00:14:15.744 "state": "online", 00:14:15.744 "raid_level": "raid1", 00:14:15.744 "superblock": true, 00:14:15.744 "num_base_bdevs": 4, 00:14:15.744 "num_base_bdevs_discovered": 2, 00:14:15.744 "num_base_bdevs_operational": 2, 00:14:15.744 "base_bdevs_list": [ 00:14:15.744 { 00:14:15.744 "name": null, 00:14:15.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.744 "is_configured": false, 00:14:15.744 "data_offset": 0, 00:14:15.744 "data_size": 63488 00:14:15.744 }, 00:14:15.744 { 00:14:15.744 "name": null, 00:14:15.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.744 "is_configured": false, 00:14:15.744 "data_offset": 2048, 00:14:15.744 "data_size": 63488 00:14:15.744 }, 00:14:15.744 { 00:14:15.745 "name": "BaseBdev3", 00:14:15.745 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:15.745 "is_configured": true, 00:14:15.745 "data_offset": 2048, 00:14:15.745 "data_size": 63488 00:14:15.745 }, 00:14:15.745 { 00:14:15.745 "name": "BaseBdev4", 00:14:15.745 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:15.745 "is_configured": true, 00:14:15.745 "data_offset": 2048, 00:14:15.745 "data_size": 63488 00:14:15.745 } 00:14:15.745 ] 00:14:15.745 }' 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.745 [2024-10-15 09:13:33.539721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:15.745 [2024-10-15 09:13:33.539912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.745 [2024-10-15 09:13:33.539942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:15.745 [2024-10-15 09:13:33.539956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.745 [2024-10-15 09:13:33.540493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.745 [2024-10-15 09:13:33.540518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:15.745 [2024-10-15 09:13:33.540618] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:15.745 [2024-10-15 09:13:33.540639] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:15.745 [2024-10-15 09:13:33.540649] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:15.745 [2024-10-15 09:13:33.540703] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:15.745 BaseBdev1 00:14:15.745 09:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.745 09:13:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.684 09:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.944 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.944 "name": "raid_bdev1", 00:14:16.944 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:16.944 "strip_size_kb": 0, 00:14:16.944 "state": "online", 00:14:16.944 
"raid_level": "raid1", 00:14:16.944 "superblock": true, 00:14:16.944 "num_base_bdevs": 4, 00:14:16.944 "num_base_bdevs_discovered": 2, 00:14:16.944 "num_base_bdevs_operational": 2, 00:14:16.944 "base_bdevs_list": [ 00:14:16.944 { 00:14:16.944 "name": null, 00:14:16.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.944 "is_configured": false, 00:14:16.944 "data_offset": 0, 00:14:16.944 "data_size": 63488 00:14:16.944 }, 00:14:16.944 { 00:14:16.944 "name": null, 00:14:16.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.944 "is_configured": false, 00:14:16.944 "data_offset": 2048, 00:14:16.944 "data_size": 63488 00:14:16.944 }, 00:14:16.944 { 00:14:16.944 "name": "BaseBdev3", 00:14:16.944 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:16.944 "is_configured": true, 00:14:16.944 "data_offset": 2048, 00:14:16.944 "data_size": 63488 00:14:16.944 }, 00:14:16.944 { 00:14:16.944 "name": "BaseBdev4", 00:14:16.944 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:16.944 "is_configured": true, 00:14:16.944 "data_offset": 2048, 00:14:16.944 "data_size": 63488 00:14:16.944 } 00:14:16.944 ] 00:14:16.944 }' 00:14:16.944 09:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.944 09:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.203 "name": "raid_bdev1", 00:14:17.203 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:17.203 "strip_size_kb": 0, 00:14:17.203 "state": "online", 00:14:17.203 "raid_level": "raid1", 00:14:17.203 "superblock": true, 00:14:17.203 "num_base_bdevs": 4, 00:14:17.203 "num_base_bdevs_discovered": 2, 00:14:17.203 "num_base_bdevs_operational": 2, 00:14:17.203 "base_bdevs_list": [ 00:14:17.203 { 00:14:17.203 "name": null, 00:14:17.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.203 "is_configured": false, 00:14:17.203 "data_offset": 0, 00:14:17.203 "data_size": 63488 00:14:17.203 }, 00:14:17.203 { 00:14:17.203 "name": null, 00:14:17.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.203 "is_configured": false, 00:14:17.203 "data_offset": 2048, 00:14:17.203 "data_size": 63488 00:14:17.203 }, 00:14:17.203 { 00:14:17.203 "name": "BaseBdev3", 00:14:17.203 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:17.203 "is_configured": true, 00:14:17.203 "data_offset": 2048, 00:14:17.203 "data_size": 63488 00:14:17.203 }, 00:14:17.203 { 00:14:17.203 "name": "BaseBdev4", 00:14:17.203 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:17.203 "is_configured": true, 00:14:17.203 "data_offset": 2048, 00:14:17.203 "data_size": 63488 00:14:17.203 } 00:14:17.203 ] 00:14:17.203 }' 00:14:17.203 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.463 [2024-10-15 09:13:35.177373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.463 [2024-10-15 09:13:35.177745] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:17.463 [2024-10-15 09:13:35.177820] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:17.463 
request: 00:14:17.463 { 00:14:17.463 "base_bdev": "BaseBdev1", 00:14:17.463 "raid_bdev": "raid_bdev1", 00:14:17.463 "method": "bdev_raid_add_base_bdev", 00:14:17.463 "req_id": 1 00:14:17.463 } 00:14:17.463 Got JSON-RPC error response 00:14:17.463 response: 00:14:17.463 { 00:14:17.463 "code": -22, 00:14:17.463 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:17.463 } 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:17.463 09:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.401 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.402 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.402 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.402 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.402 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.402 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.402 "name": "raid_bdev1", 00:14:18.402 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:18.402 "strip_size_kb": 0, 00:14:18.402 "state": "online", 00:14:18.402 "raid_level": "raid1", 00:14:18.402 "superblock": true, 00:14:18.402 "num_base_bdevs": 4, 00:14:18.402 "num_base_bdevs_discovered": 2, 00:14:18.402 "num_base_bdevs_operational": 2, 00:14:18.402 "base_bdevs_list": [ 00:14:18.402 { 00:14:18.402 "name": null, 00:14:18.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.402 "is_configured": false, 00:14:18.402 "data_offset": 0, 00:14:18.402 "data_size": 63488 00:14:18.402 }, 00:14:18.402 { 00:14:18.402 "name": null, 00:14:18.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.402 "is_configured": false, 00:14:18.402 "data_offset": 2048, 00:14:18.402 "data_size": 63488 00:14:18.402 }, 00:14:18.402 { 00:14:18.402 "name": "BaseBdev3", 00:14:18.402 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:18.402 "is_configured": true, 00:14:18.402 "data_offset": 2048, 00:14:18.402 "data_size": 63488 00:14:18.402 }, 00:14:18.402 { 00:14:18.402 "name": "BaseBdev4", 00:14:18.402 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:18.402 "is_configured": true, 00:14:18.402 
"data_offset": 2048, 00:14:18.402 "data_size": 63488 00:14:18.402 } 00:14:18.402 ] 00:14:18.402 }' 00:14:18.402 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.402 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.968 "name": "raid_bdev1", 00:14:18.968 "uuid": "be097ef0-fbcb-4f4d-9c96-d5533276081e", 00:14:18.968 "strip_size_kb": 0, 00:14:18.968 "state": "online", 00:14:18.968 "raid_level": "raid1", 00:14:18.968 "superblock": true, 00:14:18.968 "num_base_bdevs": 4, 00:14:18.968 "num_base_bdevs_discovered": 2, 00:14:18.968 "num_base_bdevs_operational": 2, 00:14:18.968 "base_bdevs_list": [ 00:14:18.968 { 00:14:18.968 "name": null, 00:14:18.968 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:18.968 "is_configured": false, 00:14:18.968 "data_offset": 0, 00:14:18.968 "data_size": 63488 00:14:18.968 }, 00:14:18.968 { 00:14:18.968 "name": null, 00:14:18.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.968 "is_configured": false, 00:14:18.968 "data_offset": 2048, 00:14:18.968 "data_size": 63488 00:14:18.968 }, 00:14:18.968 { 00:14:18.968 "name": "BaseBdev3", 00:14:18.968 "uuid": "af8866e8-ae3f-5c17-ac8b-e6a1d5603739", 00:14:18.968 "is_configured": true, 00:14:18.968 "data_offset": 2048, 00:14:18.968 "data_size": 63488 00:14:18.968 }, 00:14:18.968 { 00:14:18.968 "name": "BaseBdev4", 00:14:18.968 "uuid": "8b8d3e51-d7b0-5982-b8f4-cff21ecb28a9", 00:14:18.968 "is_configured": true, 00:14:18.968 "data_offset": 2048, 00:14:18.968 "data_size": 63488 00:14:18.968 } 00:14:18.968 ] 00:14:18.968 }' 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78179 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78179 ']' 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78179 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78179 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 
-- # process_name=reactor_0 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78179' 00:14:18.968 killing process with pid 78179 00:14:18.968 Received shutdown signal, test time was about 60.000000 seconds 00:14:18.968 00:14:18.968 Latency(us) 00:14:18.968 [2024-10-15T09:13:36.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.968 [2024-10-15T09:13:36.864Z] =================================================================================================================== 00:14:18.968 [2024-10-15T09:13:36.864Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78179 00:14:18.968 [2024-10-15 09:13:36.853665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.968 [2024-10-15 09:13:36.853826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.968 09:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78179 00:14:18.968 [2024-10-15 09:13:36.853902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.968 [2024-10-15 09:13:36.853914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:19.544 [2024-10-15 09:13:37.412579] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:20.976 00:14:20.976 real 0m27.124s 00:14:20.976 user 0m32.305s 00:14:20.976 sys 0m4.663s 00:14:20.976 ************************************ 00:14:20.976 END TEST raid_rebuild_test_sb 00:14:20.976 ************************************ 00:14:20.976 09:13:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.976 09:13:38 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:20.976 09:13:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:20.976 09:13:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.976 09:13:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.976 ************************************ 00:14:20.976 START TEST raid_rebuild_test_io 00:14:20.976 ************************************ 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:20.976 09:13:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78952 00:14:20.976 09:13:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78952 00:14:20.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78952 ']' 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.976 09:13:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:21.235 [2024-10-15 09:13:38.886603] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:14:21.235 [2024-10-15 09:13:38.888163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78952 ] 00:14:21.235 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:21.235 Zero copy mechanism will not be used. 
00:14:21.235 [2024-10-15 09:13:39.082084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.494 [2024-10-15 09:13:39.211824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.753 [2024-10-15 09:13:39.437681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.753 [2024-10-15 09:13:39.437845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.012 BaseBdev1_malloc 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.012 [2024-10-15 09:13:39.901134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:22.012 [2024-10-15 09:13:39.901349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.012 [2024-10-15 09:13:39.901385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:22.012 [2024-10-15 
09:13:39.901410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.012 [2024-10-15 09:13:39.903873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.012 [2024-10-15 09:13:39.903924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:22.012 BaseBdev1 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.012 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 BaseBdev2_malloc 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 [2024-10-15 09:13:39.960710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:22.272 [2024-10-15 09:13:39.960814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.272 [2024-10-15 09:13:39.960839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:22.272 [2024-10-15 09:13:39.960852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.272 [2024-10-15 09:13:39.963312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:22.272 [2024-10-15 09:13:39.963361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:22.272 BaseBdev2 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.272 09:13:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 BaseBdev3_malloc 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 [2024-10-15 09:13:40.031351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:22.272 [2024-10-15 09:13:40.031430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.272 [2024-10-15 09:13:40.031457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:22.272 [2024-10-15 09:13:40.031469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.272 [2024-10-15 09:13:40.033827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.272 [2024-10-15 09:13:40.033966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:22.272 BaseBdev3 00:14:22.272 09:13:40 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 BaseBdev4_malloc 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 [2024-10-15 09:13:40.091480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:22.272 [2024-10-15 09:13:40.091667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.272 [2024-10-15 09:13:40.091729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:22.272 [2024-10-15 09:13:40.091744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.272 [2024-10-15 09:13:40.094218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.272 [2024-10-15 09:13:40.094273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:22.272 BaseBdev4 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 spare_malloc 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 spare_delay 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.272 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.272 [2024-10-15 09:13:40.165505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:22.272 [2024-10-15 09:13:40.165593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.272 [2024-10-15 09:13:40.165619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:22.272 [2024-10-15 09:13:40.165631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.530 [2024-10-15 09:13:40.168044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.530 [2024-10-15 09:13:40.168091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:22.530 spare 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.530 [2024-10-15 09:13:40.177510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.530 [2024-10-15 09:13:40.179338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.530 [2024-10-15 09:13:40.179512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.530 [2024-10-15 09:13:40.179572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:22.530 [2024-10-15 09:13:40.179662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:22.530 [2024-10-15 09:13:40.179675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:22.530 [2024-10-15 09:13:40.179960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:22.530 [2024-10-15 09:13:40.180164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:22.530 [2024-10-15 09:13:40.180178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:22.530 [2024-10-15 09:13:40.180364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:22.530 09:13:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.530 "name": "raid_bdev1", 00:14:22.530 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:22.530 "strip_size_kb": 0, 00:14:22.530 "state": "online", 00:14:22.530 "raid_level": "raid1", 00:14:22.530 "superblock": false, 00:14:22.530 "num_base_bdevs": 4, 00:14:22.530 "num_base_bdevs_discovered": 4, 00:14:22.530 "num_base_bdevs_operational": 4, 00:14:22.530 "base_bdevs_list": [ 00:14:22.530 
{ 00:14:22.530 "name": "BaseBdev1", 00:14:22.530 "uuid": "4fe6cbee-6d79-5ec8-9768-794b438877c8", 00:14:22.530 "is_configured": true, 00:14:22.530 "data_offset": 0, 00:14:22.530 "data_size": 65536 00:14:22.530 }, 00:14:22.530 { 00:14:22.530 "name": "BaseBdev2", 00:14:22.530 "uuid": "e6d9dbc1-f102-5cbc-88c5-acf30db08c51", 00:14:22.530 "is_configured": true, 00:14:22.530 "data_offset": 0, 00:14:22.530 "data_size": 65536 00:14:22.530 }, 00:14:22.530 { 00:14:22.530 "name": "BaseBdev3", 00:14:22.530 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:22.530 "is_configured": true, 00:14:22.530 "data_offset": 0, 00:14:22.530 "data_size": 65536 00:14:22.530 }, 00:14:22.530 { 00:14:22.530 "name": "BaseBdev4", 00:14:22.530 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:22.530 "is_configured": true, 00:14:22.530 "data_offset": 0, 00:14:22.530 "data_size": 65536 00:14:22.530 } 00:14:22.530 ] 00:14:22.530 }' 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.530 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.788 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.788 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:22.788 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.788 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.788 [2024-10-15 09:13:40.645217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.788 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.046 [2024-10-15 09:13:40.720671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.046 "name": "raid_bdev1", 00:14:23.046 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:23.046 "strip_size_kb": 0, 00:14:23.046 "state": "online", 00:14:23.046 "raid_level": "raid1", 00:14:23.046 "superblock": false, 00:14:23.046 "num_base_bdevs": 4, 00:14:23.046 "num_base_bdevs_discovered": 3, 00:14:23.046 "num_base_bdevs_operational": 3, 00:14:23.046 "base_bdevs_list": [ 00:14:23.046 { 00:14:23.046 "name": null, 00:14:23.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.046 "is_configured": false, 00:14:23.046 "data_offset": 0, 00:14:23.046 "data_size": 65536 00:14:23.046 }, 00:14:23.046 { 00:14:23.046 "name": "BaseBdev2", 00:14:23.046 "uuid": "e6d9dbc1-f102-5cbc-88c5-acf30db08c51", 00:14:23.046 "is_configured": true, 00:14:23.046 "data_offset": 0, 00:14:23.046 "data_size": 65536 00:14:23.046 }, 00:14:23.046 { 00:14:23.046 "name": "BaseBdev3", 00:14:23.046 "uuid": 
"8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:23.046 "is_configured": true, 00:14:23.046 "data_offset": 0, 00:14:23.046 "data_size": 65536 00:14:23.046 }, 00:14:23.046 { 00:14:23.046 "name": "BaseBdev4", 00:14:23.046 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:23.046 "is_configured": true, 00:14:23.046 "data_offset": 0, 00:14:23.046 "data_size": 65536 00:14:23.046 } 00:14:23.046 ] 00:14:23.046 }' 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.046 09:13:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.046 [2024-10-15 09:13:40.824966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:23.046 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:23.046 Zero copy mechanism will not be used. 00:14:23.047 Running I/O for 60 seconds... 00:14:23.305 09:13:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.305 09:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.305 09:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.305 [2024-10-15 09:13:41.198506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.563 09:13:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.563 09:13:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:23.563 [2024-10-15 09:13:41.261691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:23.563 [2024-10-15 09:13:41.263784] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.563 [2024-10-15 09:13:41.374023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:23.563 
[2024-10-15 09:13:41.374742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:23.821 [2024-10-15 09:13:41.485065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:23.821 [2024-10-15 09:13:41.485531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:24.080 157.00 IOPS, 471.00 MiB/s [2024-10-15T09:13:41.976Z] [2024-10-15 09:13:41.835362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:24.339 [2024-10-15 09:13:42.062313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.597 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.597 09:13:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.597 "name": "raid_bdev1", 00:14:24.597 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:24.597 "strip_size_kb": 0, 00:14:24.597 "state": "online", 00:14:24.597 "raid_level": "raid1", 00:14:24.597 "superblock": false, 00:14:24.597 "num_base_bdevs": 4, 00:14:24.597 "num_base_bdevs_discovered": 4, 00:14:24.597 "num_base_bdevs_operational": 4, 00:14:24.597 "process": { 00:14:24.597 "type": "rebuild", 00:14:24.597 "target": "spare", 00:14:24.597 "progress": { 00:14:24.597 "blocks": 12288, 00:14:24.597 "percent": 18 00:14:24.597 } 00:14:24.597 }, 00:14:24.597 "base_bdevs_list": [ 00:14:24.597 { 00:14:24.597 "name": "spare", 00:14:24.597 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:24.597 "is_configured": true, 00:14:24.597 "data_offset": 0, 00:14:24.597 "data_size": 65536 00:14:24.597 }, 00:14:24.597 { 00:14:24.597 "name": "BaseBdev2", 00:14:24.597 "uuid": "e6d9dbc1-f102-5cbc-88c5-acf30db08c51", 00:14:24.597 "is_configured": true, 00:14:24.597 "data_offset": 0, 00:14:24.597 "data_size": 65536 00:14:24.597 }, 00:14:24.597 { 00:14:24.597 "name": "BaseBdev3", 00:14:24.597 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:24.597 "is_configured": true, 00:14:24.597 "data_offset": 0, 00:14:24.597 "data_size": 65536 00:14:24.597 }, 00:14:24.597 { 00:14:24.597 "name": "BaseBdev4", 00:14:24.597 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:24.597 "is_configured": true, 00:14:24.597 "data_offset": 0, 00:14:24.597 "data_size": 65536 00:14:24.597 } 00:14:24.598 ] 00:14:24.598 }' 00:14:24.598 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.598 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.598 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.598 09:13:42 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.598 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.598 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.598 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.598 [2024-10-15 09:13:42.381828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.858 [2024-10-15 09:13:42.523047] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.858 [2024-10-15 09:13:42.543561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.858 [2024-10-15 09:13:42.543638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.858 [2024-10-15 09:13:42.543657] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:24.858 [2024-10-15 09:13:42.577243] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.858 "name": "raid_bdev1", 00:14:24.858 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:24.858 "strip_size_kb": 0, 00:14:24.858 "state": "online", 00:14:24.858 "raid_level": "raid1", 00:14:24.858 "superblock": false, 00:14:24.858 "num_base_bdevs": 4, 00:14:24.858 "num_base_bdevs_discovered": 3, 00:14:24.858 "num_base_bdevs_operational": 3, 00:14:24.858 "base_bdevs_list": [ 00:14:24.858 { 00:14:24.858 "name": null, 00:14:24.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.858 "is_configured": false, 00:14:24.858 "data_offset": 0, 00:14:24.858 "data_size": 65536 00:14:24.858 }, 00:14:24.858 { 00:14:24.858 "name": "BaseBdev2", 00:14:24.858 "uuid": "e6d9dbc1-f102-5cbc-88c5-acf30db08c51", 00:14:24.858 "is_configured": true, 00:14:24.858 "data_offset": 0, 00:14:24.858 "data_size": 65536 00:14:24.858 }, 00:14:24.858 { 00:14:24.858 "name": "BaseBdev3", 00:14:24.858 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:24.858 "is_configured": true, 
00:14:24.858 "data_offset": 0, 00:14:24.858 "data_size": 65536 00:14:24.858 }, 00:14:24.858 { 00:14:24.858 "name": "BaseBdev4", 00:14:24.858 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:24.858 "is_configured": true, 00:14:24.858 "data_offset": 0, 00:14:24.858 "data_size": 65536 00:14:24.858 } 00:14:24.858 ] 00:14:24.858 }' 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.858 09:13:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.373 133.50 IOPS, 400.50 MiB/s [2024-10-15T09:13:43.269Z] 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.373 "name": "raid_bdev1", 00:14:25.373 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:25.373 "strip_size_kb": 0, 00:14:25.373 "state": "online", 00:14:25.373 "raid_level": "raid1", 00:14:25.373 
"superblock": false, 00:14:25.373 "num_base_bdevs": 4, 00:14:25.373 "num_base_bdevs_discovered": 3, 00:14:25.373 "num_base_bdevs_operational": 3, 00:14:25.373 "base_bdevs_list": [ 00:14:25.373 { 00:14:25.373 "name": null, 00:14:25.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.373 "is_configured": false, 00:14:25.373 "data_offset": 0, 00:14:25.373 "data_size": 65536 00:14:25.373 }, 00:14:25.373 { 00:14:25.373 "name": "BaseBdev2", 00:14:25.373 "uuid": "e6d9dbc1-f102-5cbc-88c5-acf30db08c51", 00:14:25.373 "is_configured": true, 00:14:25.373 "data_offset": 0, 00:14:25.373 "data_size": 65536 00:14:25.373 }, 00:14:25.373 { 00:14:25.373 "name": "BaseBdev3", 00:14:25.373 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:25.373 "is_configured": true, 00:14:25.373 "data_offset": 0, 00:14:25.373 "data_size": 65536 00:14:25.373 }, 00:14:25.373 { 00:14:25.373 "name": "BaseBdev4", 00:14:25.373 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:25.373 "is_configured": true, 00:14:25.373 "data_offset": 0, 00:14:25.373 "data_size": 65536 00:14:25.373 } 00:14:25.373 ] 00:14:25.373 }' 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.373 09:13:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.631 [2024-10-15 09:13:43.283414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:14:25.631 09:13:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.631 09:13:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:25.631 [2024-10-15 09:13:43.365981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:25.631 [2024-10-15 09:13:43.368120] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.631 [2024-10-15 09:13:43.494085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:25.631 [2024-10-15 09:13:43.495511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:25.888 [2024-10-15 09:13:43.708122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:25.888 [2024-10-15 09:13:43.709036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:26.403 144.00 IOPS, 432.00 MiB/s [2024-10-15T09:13:44.299Z] [2024-10-15 09:13:44.191474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.661 "name": "raid_bdev1", 00:14:26.661 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:26.661 "strip_size_kb": 0, 00:14:26.661 "state": "online", 00:14:26.661 "raid_level": "raid1", 00:14:26.661 "superblock": false, 00:14:26.661 "num_base_bdevs": 4, 00:14:26.661 "num_base_bdevs_discovered": 4, 00:14:26.661 "num_base_bdevs_operational": 4, 00:14:26.661 "process": { 00:14:26.661 "type": "rebuild", 00:14:26.661 "target": "spare", 00:14:26.661 "progress": { 00:14:26.661 "blocks": 10240, 00:14:26.661 "percent": 15 00:14:26.661 } 00:14:26.661 }, 00:14:26.661 "base_bdevs_list": [ 00:14:26.661 { 00:14:26.661 "name": "spare", 00:14:26.661 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:26.661 "is_configured": true, 00:14:26.661 "data_offset": 0, 00:14:26.661 "data_size": 65536 00:14:26.661 }, 00:14:26.661 { 00:14:26.661 "name": "BaseBdev2", 00:14:26.661 "uuid": "e6d9dbc1-f102-5cbc-88c5-acf30db08c51", 00:14:26.661 "is_configured": true, 00:14:26.661 "data_offset": 0, 00:14:26.661 "data_size": 65536 00:14:26.661 }, 00:14:26.661 { 00:14:26.661 "name": "BaseBdev3", 00:14:26.661 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:26.661 "is_configured": true, 00:14:26.661 "data_offset": 0, 00:14:26.661 "data_size": 65536 00:14:26.661 }, 00:14:26.661 { 00:14:26.661 "name": "BaseBdev4", 00:14:26.661 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:26.661 "is_configured": true, 00:14:26.661 "data_offset": 0, 00:14:26.661 
"data_size": 65536 00:14:26.661 } 00:14:26.661 ] 00:14:26.661 }' 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:26.661 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:26.662 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.662 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.662 [2024-10-15 09:13:44.481474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:26.662 [2024-10-15 09:13:44.532832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:26.662 [2024-10-15 09:13:44.534547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:26.919 [2024-10-15 09:13:44.636244] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:26.919 [2024-10-15 09:13:44.636411] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:26.919 
09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.919 "name": "raid_bdev1", 00:14:26.919 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:26.919 "strip_size_kb": 0, 00:14:26.919 "state": "online", 00:14:26.919 "raid_level": "raid1", 00:14:26.919 "superblock": false, 00:14:26.919 "num_base_bdevs": 4, 00:14:26.919 "num_base_bdevs_discovered": 3, 00:14:26.919 "num_base_bdevs_operational": 3, 00:14:26.919 "process": { 00:14:26.919 "type": "rebuild", 00:14:26.919 "target": "spare", 00:14:26.919 "progress": { 
00:14:26.919 "blocks": 14336, 00:14:26.919 "percent": 21 00:14:26.919 } 00:14:26.919 }, 00:14:26.919 "base_bdevs_list": [ 00:14:26.919 { 00:14:26.919 "name": "spare", 00:14:26.919 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:26.919 "is_configured": true, 00:14:26.919 "data_offset": 0, 00:14:26.919 "data_size": 65536 00:14:26.919 }, 00:14:26.919 { 00:14:26.919 "name": null, 00:14:26.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.919 "is_configured": false, 00:14:26.919 "data_offset": 0, 00:14:26.919 "data_size": 65536 00:14:26.919 }, 00:14:26.919 { 00:14:26.919 "name": "BaseBdev3", 00:14:26.919 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:26.919 "is_configured": true, 00:14:26.919 "data_offset": 0, 00:14:26.919 "data_size": 65536 00:14:26.919 }, 00:14:26.919 { 00:14:26.919 "name": "BaseBdev4", 00:14:26.919 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:26.919 "is_configured": true, 00:14:26.919 "data_offset": 0, 00:14:26.919 "data_size": 65536 00:14:26.919 } 00:14:26.919 ] 00:14:26.919 }' 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.919 [2024-10-15 09:13:44.766004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=508 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.919 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.177 09:13:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.177 131.00 IOPS, 393.00 MiB/s [2024-10-15T09:13:45.073Z] 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.177 "name": "raid_bdev1", 00:14:27.177 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:27.177 "strip_size_kb": 0, 00:14:27.177 "state": "online", 00:14:27.177 "raid_level": "raid1", 00:14:27.177 "superblock": false, 00:14:27.177 "num_base_bdevs": 4, 00:14:27.177 "num_base_bdevs_discovered": 3, 00:14:27.177 "num_base_bdevs_operational": 3, 00:14:27.177 "process": { 00:14:27.177 "type": "rebuild", 00:14:27.177 "target": "spare", 00:14:27.177 "progress": { 00:14:27.177 "blocks": 16384, 00:14:27.177 "percent": 25 00:14:27.177 } 00:14:27.177 }, 00:14:27.177 "base_bdevs_list": [ 00:14:27.177 { 00:14:27.177 "name": "spare", 00:14:27.177 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:27.177 "is_configured": true, 00:14:27.177 "data_offset": 0, 00:14:27.177 "data_size": 65536 00:14:27.177 }, 00:14:27.177 { 00:14:27.177 "name": null, 
00:14:27.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.177 "is_configured": false, 00:14:27.177 "data_offset": 0, 00:14:27.177 "data_size": 65536 00:14:27.177 }, 00:14:27.178 { 00:14:27.178 "name": "BaseBdev3", 00:14:27.178 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:27.178 "is_configured": true, 00:14:27.178 "data_offset": 0, 00:14:27.178 "data_size": 65536 00:14:27.178 }, 00:14:27.178 { 00:14:27.178 "name": "BaseBdev4", 00:14:27.178 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:27.178 "is_configured": true, 00:14:27.178 "data_offset": 0, 00:14:27.178 "data_size": 65536 00:14:27.178 } 00:14:27.178 ] 00:14:27.178 }' 00:14:27.178 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.178 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.178 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.178 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.178 09:13:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.436 [2024-10-15 09:13:45.089505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:27.436 [2024-10-15 09:13:45.331104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:27.693 [2024-10-15 09:13:45.551049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:27.693 [2024-10-15 09:13:45.551783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:28.210 115.20 IOPS, 345.60 MiB/s [2024-10-15T09:13:46.106Z] [2024-10-15 09:13:45.882744] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.210 "name": "raid_bdev1", 00:14:28.210 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:28.210 "strip_size_kb": 0, 00:14:28.210 "state": "online", 00:14:28.210 "raid_level": "raid1", 00:14:28.210 "superblock": false, 00:14:28.210 "num_base_bdevs": 4, 00:14:28.210 "num_base_bdevs_discovered": 3, 00:14:28.210 "num_base_bdevs_operational": 3, 00:14:28.210 "process": { 00:14:28.210 "type": "rebuild", 00:14:28.210 "target": "spare", 00:14:28.210 "progress": { 00:14:28.210 "blocks": 32768, 00:14:28.210 "percent": 50 00:14:28.210 } 00:14:28.210 }, 
00:14:28.210 "base_bdevs_list": [ 00:14:28.210 { 00:14:28.210 "name": "spare", 00:14:28.210 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:28.210 "is_configured": true, 00:14:28.210 "data_offset": 0, 00:14:28.210 "data_size": 65536 00:14:28.210 }, 00:14:28.210 { 00:14:28.210 "name": null, 00:14:28.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.210 "is_configured": false, 00:14:28.210 "data_offset": 0, 00:14:28.210 "data_size": 65536 00:14:28.210 }, 00:14:28.210 { 00:14:28.210 "name": "BaseBdev3", 00:14:28.210 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:28.210 "is_configured": true, 00:14:28.210 "data_offset": 0, 00:14:28.210 "data_size": 65536 00:14:28.210 }, 00:14:28.210 { 00:14:28.210 "name": "BaseBdev4", 00:14:28.210 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:28.210 "is_configured": true, 00:14:28.210 "data_offset": 0, 00:14:28.210 "data_size": 65536 00:14:28.210 } 00:14:28.210 ] 00:14:28.210 }' 00:14:28.210 09:13:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.210 09:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.210 09:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.210 09:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.210 09:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.468 [2024-10-15 09:13:46.121081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:28.726 [2024-10-15 09:13:46.467887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:29.242 102.83 IOPS, 308.50 MiB/s [2024-10-15T09:13:47.138Z] 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.242 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.242 "name": "raid_bdev1", 00:14:29.242 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:29.242 "strip_size_kb": 0, 00:14:29.242 "state": "online", 00:14:29.242 "raid_level": "raid1", 00:14:29.243 "superblock": false, 00:14:29.243 "num_base_bdevs": 4, 00:14:29.243 "num_base_bdevs_discovered": 3, 00:14:29.243 "num_base_bdevs_operational": 3, 00:14:29.243 "process": { 00:14:29.243 "type": "rebuild", 00:14:29.243 "target": "spare", 00:14:29.243 "progress": { 00:14:29.243 "blocks": 51200, 00:14:29.243 "percent": 78 00:14:29.243 } 00:14:29.243 }, 00:14:29.243 "base_bdevs_list": [ 00:14:29.243 { 00:14:29.243 "name": "spare", 00:14:29.243 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:29.243 "is_configured": true, 00:14:29.243 "data_offset": 0, 00:14:29.243 
"data_size": 65536 00:14:29.243 }, 00:14:29.243 { 00:14:29.243 "name": null, 00:14:29.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.243 "is_configured": false, 00:14:29.243 "data_offset": 0, 00:14:29.243 "data_size": 65536 00:14:29.243 }, 00:14:29.243 { 00:14:29.243 "name": "BaseBdev3", 00:14:29.243 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:29.243 "is_configured": true, 00:14:29.243 "data_offset": 0, 00:14:29.243 "data_size": 65536 00:14:29.243 }, 00:14:29.243 { 00:14:29.243 "name": "BaseBdev4", 00:14:29.243 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:29.243 "is_configured": true, 00:14:29.243 "data_offset": 0, 00:14:29.243 "data_size": 65536 00:14:29.243 } 00:14:29.243 ] 00:14:29.243 }' 00:14:29.243 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.501 [2024-10-15 09:13:47.152478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:29.501 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.501 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.501 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.501 09:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.760 [2024-10-15 09:13:47.479730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:30.274 93.14 IOPS, 279.43 MiB/s [2024-10-15T09:13:48.170Z] [2024-10-15 09:13:47.947837] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.274 [2024-10-15 09:13:48.054995] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.274 [2024-10-15 09:13:48.058985] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.532 "name": "raid_bdev1", 00:14:30.532 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:30.532 "strip_size_kb": 0, 00:14:30.532 "state": "online", 00:14:30.532 "raid_level": "raid1", 00:14:30.532 "superblock": false, 00:14:30.532 "num_base_bdevs": 4, 00:14:30.532 "num_base_bdevs_discovered": 3, 00:14:30.532 "num_base_bdevs_operational": 3, 00:14:30.532 "base_bdevs_list": [ 00:14:30.532 { 00:14:30.532 "name": "spare", 00:14:30.532 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:30.532 "is_configured": true, 00:14:30.532 "data_offset": 0, 00:14:30.532 "data_size": 65536 00:14:30.532 }, 
00:14:30.532 { 00:14:30.532 "name": null, 00:14:30.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.532 "is_configured": false, 00:14:30.532 "data_offset": 0, 00:14:30.532 "data_size": 65536 00:14:30.532 }, 00:14:30.532 { 00:14:30.532 "name": "BaseBdev3", 00:14:30.532 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:30.532 "is_configured": true, 00:14:30.532 "data_offset": 0, 00:14:30.532 "data_size": 65536 00:14:30.532 }, 00:14:30.532 { 00:14:30.532 "name": "BaseBdev4", 00:14:30.532 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:30.532 "is_configured": true, 00:14:30.532 "data_offset": 0, 00:14:30.532 "data_size": 65536 00:14:30.532 } 00:14:30.532 ] 00:14:30.532 }' 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.532 "name": "raid_bdev1", 00:14:30.532 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:30.532 "strip_size_kb": 0, 00:14:30.532 "state": "online", 00:14:30.532 "raid_level": "raid1", 00:14:30.532 "superblock": false, 00:14:30.532 "num_base_bdevs": 4, 00:14:30.532 "num_base_bdevs_discovered": 3, 00:14:30.532 "num_base_bdevs_operational": 3, 00:14:30.532 "base_bdevs_list": [ 00:14:30.532 { 00:14:30.532 "name": "spare", 00:14:30.532 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:30.532 "is_configured": true, 00:14:30.532 "data_offset": 0, 00:14:30.532 "data_size": 65536 00:14:30.532 }, 00:14:30.532 { 00:14:30.532 "name": null, 00:14:30.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.532 "is_configured": false, 00:14:30.532 "data_offset": 0, 00:14:30.532 "data_size": 65536 00:14:30.532 }, 00:14:30.532 { 00:14:30.532 "name": "BaseBdev3", 00:14:30.532 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:30.532 "is_configured": true, 00:14:30.532 "data_offset": 0, 00:14:30.532 "data_size": 65536 00:14:30.532 }, 00:14:30.532 { 00:14:30.532 "name": "BaseBdev4", 00:14:30.532 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:30.532 "is_configured": true, 00:14:30.532 "data_offset": 0, 00:14:30.532 "data_size": 65536 00:14:30.532 } 00:14:30.532 ] 00:14:30.532 }' 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.532 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.532 09:13:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.880 "name": "raid_bdev1", 
00:14:30.880 "uuid": "169613be-9e28-43c0-8d76-a4ba885231af", 00:14:30.880 "strip_size_kb": 0, 00:14:30.880 "state": "online", 00:14:30.880 "raid_level": "raid1", 00:14:30.880 "superblock": false, 00:14:30.880 "num_base_bdevs": 4, 00:14:30.880 "num_base_bdevs_discovered": 3, 00:14:30.880 "num_base_bdevs_operational": 3, 00:14:30.880 "base_bdevs_list": [ 00:14:30.880 { 00:14:30.880 "name": "spare", 00:14:30.880 "uuid": "41558adb-0b33-501a-81a8-472909366ff8", 00:14:30.880 "is_configured": true, 00:14:30.880 "data_offset": 0, 00:14:30.880 "data_size": 65536 00:14:30.880 }, 00:14:30.880 { 00:14:30.880 "name": null, 00:14:30.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.880 "is_configured": false, 00:14:30.880 "data_offset": 0, 00:14:30.880 "data_size": 65536 00:14:30.880 }, 00:14:30.880 { 00:14:30.880 "name": "BaseBdev3", 00:14:30.880 "uuid": "8fadac17-9e3c-5e2e-8032-ba4f984eca32", 00:14:30.880 "is_configured": true, 00:14:30.880 "data_offset": 0, 00:14:30.880 "data_size": 65536 00:14:30.880 }, 00:14:30.880 { 00:14:30.880 "name": "BaseBdev4", 00:14:30.880 "uuid": "ee9448a7-5daf-5434-bb5f-ea2ea735b055", 00:14:30.880 "is_configured": true, 00:14:30.880 "data_offset": 0, 00:14:30.880 "data_size": 65536 00:14:30.880 } 00:14:30.880 ] 00:14:30.880 }' 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.880 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.139 86.75 IOPS, 260.25 MiB/s [2024-10-15T09:13:49.035Z] 09:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.139 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.139 09:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.139 [2024-10-15 09:13:48.939131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.139 [2024-10-15 
09:13:48.939319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.396 00:14:31.396 Latency(us) 00:14:31.396 [2024-10-15T09:13:49.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.396 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:31.396 raid_bdev1 : 8.23 85.08 255.25 0.00 0.00 16269.94 348.79 119968.08 00:14:31.396 [2024-10-15T09:13:49.292Z] =================================================================================================================== 00:14:31.396 [2024-10-15T09:13:49.292Z] Total : 85.08 255.25 0.00 0.00 16269.94 348.79 119968.08 00:14:31.396 [2024-10-15 09:13:49.067334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.396 [2024-10-15 09:13:49.067491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.396 [2024-10-15 09:13:49.067646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.396 [2024-10-15 09:13:49.067744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:31.396 { 00:14:31.396 "results": [ 00:14:31.396 { 00:14:31.396 "job": "raid_bdev1", 00:14:31.396 "core_mask": "0x1", 00:14:31.396 "workload": "randrw", 00:14:31.396 "percentage": 50, 00:14:31.396 "status": "finished", 00:14:31.396 "queue_depth": 2, 00:14:31.396 "io_size": 3145728, 00:14:31.396 "runtime": 8.227233, 00:14:31.396 "iops": 85.08328377232054, 00:14:31.396 "mibps": 255.2498513169616, 00:14:31.396 "io_failed": 0, 00:14:31.396 "io_timeout": 0, 00:14:31.396 "avg_latency_us": 16269.944414223331, 00:14:31.396 "min_latency_us": 348.7860262008734, 00:14:31.396 "max_latency_us": 119968.08384279476 00:14:31.396 } 00:14:31.396 ], 00:14:31.396 "core_count": 1 00:14:31.396 } 00:14:31.396 09:13:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.396 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.396 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.397 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:31.655 
/dev/nbd0 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.655 1+0 records in 00:14:31.655 1+0 records out 00:14:31.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534137 s, 7.7 MB/s 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:31.655 
09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.655 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:31.914 /dev/nbd1 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:31.914 09:13:49 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.914 1+0 records in 00:14:31.914 1+0 records out 00:14:31.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621569 s, 6.6 MB/s 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.914 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:32.174 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:32.174 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.174 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:32.174 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.174 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:32.174 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.174 09:13:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:32.432 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:32.692 /dev/nbd1 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 
00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.692 1+0 records in 00:14:32.692 1+0 records out 00:14:32.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316508 s, 12.9 MB/s 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:32.692 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:32.950 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:32.950 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.950 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:32.950 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.950 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:32.950 09:13:50 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.950 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.209 09:13:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.467 
09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78952 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78952 ']' 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78952 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78952 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78952' 00:14:33.467 killing process with pid 78952 00:14:33.467 Received shutdown signal, test time was about 10.418209 seconds 00:14:33.467 00:14:33.467 Latency(us) 00:14:33.467 [2024-10-15T09:13:51.363Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.467 [2024-10-15T09:13:51.363Z] =================================================================================================================== 00:14:33.467 [2024-10-15T09:13:51.363Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78952 00:14:33.467 [2024-10-15 09:13:51.225590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.467 09:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78952 00:14:34.099 [2024-10-15 09:13:51.750408] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:35.472 00:14:35.472 real 0m14.407s 00:14:35.472 user 0m18.184s 00:14:35.472 sys 0m2.102s 00:14:35.472 ************************************ 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.472 END TEST raid_rebuild_test_io 00:14:35.472 ************************************ 00:14:35.472 09:13:53 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:35.472 09:13:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:35.472 09:13:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.472 09:13:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.472 ************************************ 00:14:35.472 START TEST raid_rebuild_test_sb_io 00:14:35.472 ************************************ 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79372 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79372 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79372 ']' 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.472 09:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.472 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:35.472 Zero copy mechanism will not be used. 00:14:35.472 [2024-10-15 09:13:53.329283] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:14:35.472 [2024-10-15 09:13:53.329445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79372 ] 00:14:35.729 [2024-10-15 09:13:53.506145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.988 [2024-10-15 09:13:53.645869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.245 [2024-10-15 09:13:53.888904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.245 [2024-10-15 09:13:53.888967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.503 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.503 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:36.503 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.504 BaseBdev1_malloc 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.504 [2024-10-15 09:13:54.302791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.504 [2024-10-15 09:13:54.302904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.504 [2024-10-15 09:13:54.302939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.504 [2024-10-15 09:13:54.302954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.504 [2024-10-15 09:13:54.305638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.504 [2024-10-15 09:13:54.305706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.504 BaseBdev1 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.504 BaseBdev2_malloc 00:14:36.504 09:13:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.504 [2024-10-15 09:13:54.368284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:36.504 [2024-10-15 09:13:54.368478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.504 [2024-10-15 09:13:54.368527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.504 [2024-10-15 09:13:54.368576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.504 [2024-10-15 09:13:54.371157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.504 [2024-10-15 09:13:54.371264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.504 BaseBdev2 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.504 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.763 BaseBdev3_malloc 00:14:36.763 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.763 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:36.763 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.763 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.763 [2024-10-15 09:13:54.440601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:36.763 [2024-10-15 09:13:54.440709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.763 [2024-10-15 09:13:54.440739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:36.763 [2024-10-15 09:13:54.440753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.763 [2024-10-15 09:13:54.443308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.763 [2024-10-15 09:13:54.443460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:36.763 BaseBdev3 00:14:36.763 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.763 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 BaseBdev4_malloc 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 [2024-10-15 09:13:54.499718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:36.764 [2024-10-15 09:13:54.499810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.764 [2024-10-15 09:13:54.499838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:36.764 [2024-10-15 09:13:54.499850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.764 [2024-10-15 09:13:54.502323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.764 BaseBdev4 00:14:36.764 [2024-10-15 09:13:54.502471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 spare_malloc 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 spare_delay 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.764 09:13:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 [2024-10-15 09:13:54.576797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.764 [2024-10-15 09:13:54.576979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.764 [2024-10-15 09:13:54.577011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:36.764 [2024-10-15 09:13:54.577024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.764 [2024-10-15 09:13:54.579560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.764 [2024-10-15 09:13:54.579615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.764 spare 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 [2024-10-15 09:13:54.588879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.764 [2024-10-15 09:13:54.591025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.764 [2024-10-15 09:13:54.591114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.764 [2024-10-15 09:13:54.591179] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:36.764 [2024-10-15 09:13:54.591407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:36.764 [2024-10-15 09:13:54.591428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.764 [2024-10-15 09:13:54.591750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:36.764 [2024-10-15 09:13:54.591978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:36.764 [2024-10-15 09:13:54.592093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:36.764 [2024-10-15 09:13:54.592296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.764 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.764 "name": "raid_bdev1", 00:14:36.764 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:36.764 "strip_size_kb": 0, 00:14:36.764 "state": "online", 00:14:36.764 "raid_level": "raid1", 00:14:36.764 "superblock": true, 00:14:36.764 "num_base_bdevs": 4, 00:14:36.764 "num_base_bdevs_discovered": 4, 00:14:36.764 "num_base_bdevs_operational": 4, 00:14:36.764 "base_bdevs_list": [ 00:14:36.764 { 00:14:36.764 "name": "BaseBdev1", 00:14:36.764 "uuid": "0afdc14b-33b1-554e-b0d8-34d8f4890bdb", 00:14:36.764 "is_configured": true, 00:14:36.764 "data_offset": 2048, 00:14:36.764 "data_size": 63488 00:14:36.764 }, 00:14:36.764 { 00:14:36.764 "name": "BaseBdev2", 00:14:36.764 "uuid": "30488582-a498-5a5a-a126-439e7602ead5", 00:14:36.764 "is_configured": true, 00:14:36.764 "data_offset": 2048, 00:14:36.764 "data_size": 63488 00:14:36.764 }, 00:14:36.764 { 00:14:36.764 "name": "BaseBdev3", 00:14:36.764 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:36.764 "is_configured": true, 00:14:36.764 "data_offset": 2048, 00:14:36.765 "data_size": 63488 00:14:36.765 }, 00:14:36.765 { 00:14:36.765 "name": "BaseBdev4", 00:14:36.765 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:36.765 
"is_configured": true, 00:14:36.765 "data_offset": 2048, 00:14:36.765 "data_size": 63488 00:14:36.765 } 00:14:36.765 ] 00:14:36.765 }' 00:14:36.765 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.765 09:13:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.332 [2024-10-15 09:13:55.048440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.332 [2024-10-15 09:13:55.131904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.332 
09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.332 "name": "raid_bdev1", 00:14:37.332 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:37.332 "strip_size_kb": 0, 00:14:37.332 "state": "online", 00:14:37.332 "raid_level": "raid1", 00:14:37.332 "superblock": true, 00:14:37.332 "num_base_bdevs": 4, 00:14:37.332 "num_base_bdevs_discovered": 3, 00:14:37.332 "num_base_bdevs_operational": 3, 00:14:37.332 "base_bdevs_list": [ 00:14:37.332 { 00:14:37.332 "name": null, 00:14:37.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.332 "is_configured": false, 00:14:37.332 "data_offset": 0, 00:14:37.332 "data_size": 63488 00:14:37.332 }, 00:14:37.332 { 00:14:37.332 "name": "BaseBdev2", 00:14:37.332 "uuid": "30488582-a498-5a5a-a126-439e7602ead5", 00:14:37.332 "is_configured": true, 00:14:37.332 "data_offset": 2048, 00:14:37.332 "data_size": 63488 00:14:37.332 }, 00:14:37.332 { 00:14:37.332 "name": "BaseBdev3", 00:14:37.332 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:37.332 "is_configured": true, 00:14:37.332 "data_offset": 2048, 00:14:37.332 "data_size": 63488 00:14:37.332 }, 00:14:37.332 { 00:14:37.332 "name": "BaseBdev4", 00:14:37.332 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:37.332 "is_configured": true, 00:14:37.332 "data_offset": 2048, 00:14:37.332 "data_size": 63488 00:14:37.332 } 00:14:37.332 ] 00:14:37.332 }' 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.332 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.589 [2024-10-15 09:13:55.249072] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:37.589 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:37.589 Zero copy mechanism will not be used. 00:14:37.589 Running I/O for 60 seconds... 00:14:37.848 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:37.848 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.848 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.848 [2024-10-15 09:13:55.653125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.848 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.848 09:13:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:37.848 [2024-10-15 09:13:55.710684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:37.848 [2024-10-15 09:13:55.712914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.107 [2024-10-15 09:13:55.839930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.107 [2024-10-15 09:13:55.841624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.365 [2024-10-15 09:13:56.059002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.365 [2024-10-15 09:13:56.059482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.623 152.00 IOPS, 456.00 MiB/s [2024-10-15T09:13:56.519Z] [2024-10-15 09:13:56.420124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 
offset_begin: 6144 offset_end: 12288 00:14:38.623 [2024-10-15 09:13:56.428457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:38.881 [2024-10-15 09:13:56.652270] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:38.881 [2024-10-15 09:13:56.652737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.881 "name": "raid_bdev1", 00:14:38.881 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:38.881 "strip_size_kb": 0, 00:14:38.881 "state": "online", 00:14:38.881 "raid_level": "raid1", 00:14:38.881 "superblock": 
true, 00:14:38.881 "num_base_bdevs": 4, 00:14:38.881 "num_base_bdevs_discovered": 4, 00:14:38.881 "num_base_bdevs_operational": 4, 00:14:38.881 "process": { 00:14:38.881 "type": "rebuild", 00:14:38.881 "target": "spare", 00:14:38.881 "progress": { 00:14:38.881 "blocks": 10240, 00:14:38.881 "percent": 16 00:14:38.881 } 00:14:38.881 }, 00:14:38.881 "base_bdevs_list": [ 00:14:38.881 { 00:14:38.881 "name": "spare", 00:14:38.881 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:38.881 "is_configured": true, 00:14:38.881 "data_offset": 2048, 00:14:38.881 "data_size": 63488 00:14:38.881 }, 00:14:38.881 { 00:14:38.881 "name": "BaseBdev2", 00:14:38.881 "uuid": "30488582-a498-5a5a-a126-439e7602ead5", 00:14:38.881 "is_configured": true, 00:14:38.881 "data_offset": 2048, 00:14:38.881 "data_size": 63488 00:14:38.881 }, 00:14:38.881 { 00:14:38.881 "name": "BaseBdev3", 00:14:38.881 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:38.881 "is_configured": true, 00:14:38.881 "data_offset": 2048, 00:14:38.881 "data_size": 63488 00:14:38.881 }, 00:14:38.881 { 00:14:38.881 "name": "BaseBdev4", 00:14:38.881 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:38.881 "is_configured": true, 00:14:38.881 "data_offset": 2048, 00:14:38.881 "data_size": 63488 00:14:38.881 } 00:14:38.881 ] 00:14:38.881 }' 00:14:38.881 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.139 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.139 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.139 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.139 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:39.139 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:39.139 09:13:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.139 [2024-10-15 09:13:56.860716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.139 [2024-10-15 09:13:56.969727] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:39.139 [2024-10-15 09:13:56.991446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.139 [2024-10-15 09:13:56.991537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.139 [2024-10-15 09:13:56.991557] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:39.139 [2024-10-15 09:13:57.013388] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:39.139 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.398 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.398 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.398 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.398 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.398 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.398 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.398 "name": "raid_bdev1", 00:14:39.398 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:39.398 "strip_size_kb": 0, 00:14:39.398 "state": "online", 00:14:39.398 "raid_level": "raid1", 00:14:39.398 "superblock": true, 00:14:39.398 "num_base_bdevs": 4, 00:14:39.398 "num_base_bdevs_discovered": 3, 00:14:39.398 "num_base_bdevs_operational": 3, 00:14:39.398 "base_bdevs_list": [ 00:14:39.398 { 00:14:39.398 "name": null, 00:14:39.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.398 "is_configured": false, 00:14:39.398 "data_offset": 0, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": "BaseBdev2", 00:14:39.398 "uuid": "30488582-a498-5a5a-a126-439e7602ead5", 00:14:39.398 "is_configured": true, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": "BaseBdev3", 00:14:39.398 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:39.398 "is_configured": true, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": "BaseBdev4", 00:14:39.398 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:39.398 "is_configured": true, 
00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 } 00:14:39.398 ] 00:14:39.398 }' 00:14:39.398 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.398 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.657 135.00 IOPS, 405.00 MiB/s [2024-10-15T09:13:57.553Z] 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.657 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.915 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.915 "name": "raid_bdev1", 00:14:39.915 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:39.915 "strip_size_kb": 0, 00:14:39.915 "state": "online", 00:14:39.915 "raid_level": "raid1", 00:14:39.915 "superblock": true, 00:14:39.915 "num_base_bdevs": 4, 00:14:39.915 "num_base_bdevs_discovered": 3, 00:14:39.915 "num_base_bdevs_operational": 3, 00:14:39.915 "base_bdevs_list": [ 
00:14:39.915 { 00:14:39.915 "name": null, 00:14:39.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.915 "is_configured": false, 00:14:39.915 "data_offset": 0, 00:14:39.915 "data_size": 63488 00:14:39.915 }, 00:14:39.916 { 00:14:39.916 "name": "BaseBdev2", 00:14:39.916 "uuid": "30488582-a498-5a5a-a126-439e7602ead5", 00:14:39.916 "is_configured": true, 00:14:39.916 "data_offset": 2048, 00:14:39.916 "data_size": 63488 00:14:39.916 }, 00:14:39.916 { 00:14:39.916 "name": "BaseBdev3", 00:14:39.916 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:39.916 "is_configured": true, 00:14:39.916 "data_offset": 2048, 00:14:39.916 "data_size": 63488 00:14:39.916 }, 00:14:39.916 { 00:14:39.916 "name": "BaseBdev4", 00:14:39.916 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:39.916 "is_configured": true, 00:14:39.916 "data_offset": 2048, 00:14:39.916 "data_size": 63488 00:14:39.916 } 00:14:39.916 ] 00:14:39.916 }' 00:14:39.916 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.916 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.916 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.916 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.916 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:39.916 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.916 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.916 [2024-10-15 09:13:57.655031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.916 09:13:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.916 09:13:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:39.916 [2024-10-15 09:13:57.735626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:39.916 [2024-10-15 09:13:57.738129] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.175 [2024-10-15 09:13:57.865035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:40.433 [2024-10-15 09:13:58.097889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:40.433 [2024-10-15 09:13:58.098273] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:40.691 141.33 IOPS, 424.00 MiB/s [2024-10-15T09:13:58.587Z] [2024-10-15 09:13:58.436148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:40.691 [2024-10-15 09:13:58.558890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.950 "name": "raid_bdev1", 00:14:40.950 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:40.950 "strip_size_kb": 0, 00:14:40.950 "state": "online", 00:14:40.950 "raid_level": "raid1", 00:14:40.950 "superblock": true, 00:14:40.950 "num_base_bdevs": 4, 00:14:40.950 "num_base_bdevs_discovered": 4, 00:14:40.950 "num_base_bdevs_operational": 4, 00:14:40.950 "process": { 00:14:40.950 "type": "rebuild", 00:14:40.950 "target": "spare", 00:14:40.950 "progress": { 00:14:40.950 "blocks": 12288, 00:14:40.950 "percent": 19 00:14:40.950 } 00:14:40.950 }, 00:14:40.950 "base_bdevs_list": [ 00:14:40.950 { 00:14:40.950 "name": "spare", 00:14:40.950 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:40.950 "is_configured": true, 00:14:40.950 "data_offset": 2048, 00:14:40.950 "data_size": 63488 00:14:40.950 }, 00:14:40.950 { 00:14:40.950 "name": "BaseBdev2", 00:14:40.950 "uuid": "30488582-a498-5a5a-a126-439e7602ead5", 00:14:40.950 "is_configured": true, 00:14:40.950 "data_offset": 2048, 00:14:40.950 "data_size": 63488 00:14:40.950 }, 00:14:40.950 { 00:14:40.950 "name": "BaseBdev3", 00:14:40.950 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:40.950 "is_configured": true, 00:14:40.950 "data_offset": 2048, 00:14:40.950 "data_size": 63488 00:14:40.950 }, 00:14:40.950 { 00:14:40.950 "name": "BaseBdev4", 00:14:40.950 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:40.950 "is_configured": true, 00:14:40.950 "data_offset": 2048, 00:14:40.950 "data_size": 63488 00:14:40.950 } 00:14:40.950 ] 00:14:40.950 }' 00:14:40.950 09:13:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:40.950 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.950 09:13:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.208 [2024-10-15 09:13:58.850942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.208 [2024-10-15 09:13:58.923283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.467 [2024-10-15 09:13:59.134188] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:41.467 [2024-10-15 09:13:59.134359] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:41.467 [2024-10-15 
09:13:59.144553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.467 [2024-10-15 09:13:59.145289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.467 "name": "raid_bdev1", 00:14:41.467 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:41.467 
"strip_size_kb": 0, 00:14:41.467 "state": "online", 00:14:41.467 "raid_level": "raid1", 00:14:41.467 "superblock": true, 00:14:41.467 "num_base_bdevs": 4, 00:14:41.467 "num_base_bdevs_discovered": 3, 00:14:41.467 "num_base_bdevs_operational": 3, 00:14:41.467 "process": { 00:14:41.467 "type": "rebuild", 00:14:41.467 "target": "spare", 00:14:41.467 "progress": { 00:14:41.467 "blocks": 16384, 00:14:41.467 "percent": 25 00:14:41.467 } 00:14:41.467 }, 00:14:41.467 "base_bdevs_list": [ 00:14:41.467 { 00:14:41.467 "name": "spare", 00:14:41.467 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:41.467 "is_configured": true, 00:14:41.467 "data_offset": 2048, 00:14:41.467 "data_size": 63488 00:14:41.467 }, 00:14:41.467 { 00:14:41.467 "name": null, 00:14:41.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.467 "is_configured": false, 00:14:41.467 "data_offset": 0, 00:14:41.467 "data_size": 63488 00:14:41.467 }, 00:14:41.467 { 00:14:41.467 "name": "BaseBdev3", 00:14:41.467 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:41.467 "is_configured": true, 00:14:41.467 "data_offset": 2048, 00:14:41.467 "data_size": 63488 00:14:41.467 }, 00:14:41.467 { 00:14:41.467 "name": "BaseBdev4", 00:14:41.467 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:41.467 "is_configured": true, 00:14:41.467 "data_offset": 2048, 00:14:41.467 "data_size": 63488 00:14:41.467 } 00:14:41.467 ] 00:14:41.467 }' 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.467 117.25 IOPS, 351.75 MiB/s [2024-10-15T09:13:59.363Z] 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@706 -- # local timeout=523 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.467 "name": "raid_bdev1", 00:14:41.467 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:41.467 "strip_size_kb": 0, 00:14:41.467 "state": "online", 00:14:41.467 "raid_level": "raid1", 00:14:41.467 "superblock": true, 00:14:41.467 "num_base_bdevs": 4, 00:14:41.467 "num_base_bdevs_discovered": 3, 00:14:41.467 "num_base_bdevs_operational": 3, 00:14:41.467 "process": { 00:14:41.467 "type": "rebuild", 00:14:41.467 "target": "spare", 00:14:41.467 "progress": { 00:14:41.467 "blocks": 16384, 00:14:41.467 "percent": 25 00:14:41.467 } 00:14:41.467 }, 00:14:41.467 "base_bdevs_list": [ 
00:14:41.467 { 00:14:41.467 "name": "spare", 00:14:41.467 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:41.467 "is_configured": true, 00:14:41.467 "data_offset": 2048, 00:14:41.467 "data_size": 63488 00:14:41.467 }, 00:14:41.467 { 00:14:41.467 "name": null, 00:14:41.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.467 "is_configured": false, 00:14:41.467 "data_offset": 0, 00:14:41.467 "data_size": 63488 00:14:41.467 }, 00:14:41.467 { 00:14:41.467 "name": "BaseBdev3", 00:14:41.467 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:41.467 "is_configured": true, 00:14:41.467 "data_offset": 2048, 00:14:41.467 "data_size": 63488 00:14:41.467 }, 00:14:41.467 { 00:14:41.467 "name": "BaseBdev4", 00:14:41.467 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:41.467 "is_configured": true, 00:14:41.467 "data_offset": 2048, 00:14:41.467 "data_size": 63488 00:14:41.467 } 00:14:41.467 ] 00:14:41.467 }' 00:14:41.467 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.725 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.725 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.725 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.725 09:13:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.659 104.20 IOPS, 312.60 MiB/s [2024-10-15T09:14:00.555Z] 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.659 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.659 "name": "raid_bdev1", 00:14:42.659 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:42.659 "strip_size_kb": 0, 00:14:42.659 "state": "online", 00:14:42.659 "raid_level": "raid1", 00:14:42.659 "superblock": true, 00:14:42.659 "num_base_bdevs": 4, 00:14:42.659 "num_base_bdevs_discovered": 3, 00:14:42.659 "num_base_bdevs_operational": 3, 00:14:42.659 "process": { 00:14:42.659 "type": "rebuild", 00:14:42.659 "target": "spare", 00:14:42.659 "progress": { 00:14:42.659 "blocks": 36864, 00:14:42.659 "percent": 58 00:14:42.659 } 00:14:42.659 }, 00:14:42.659 "base_bdevs_list": [ 00:14:42.659 { 00:14:42.659 "name": "spare", 00:14:42.659 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:42.659 "is_configured": true, 00:14:42.659 "data_offset": 2048, 00:14:42.659 "data_size": 63488 00:14:42.659 }, 00:14:42.659 { 00:14:42.659 "name": null, 00:14:42.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.659 "is_configured": false, 00:14:42.659 "data_offset": 0, 00:14:42.659 "data_size": 63488 00:14:42.659 }, 00:14:42.659 { 00:14:42.659 "name": "BaseBdev3", 00:14:42.659 
"uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:42.659 "is_configured": true, 00:14:42.659 "data_offset": 2048, 00:14:42.659 "data_size": 63488 00:14:42.659 }, 00:14:42.659 { 00:14:42.660 "name": "BaseBdev4", 00:14:42.660 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:42.660 "is_configured": true, 00:14:42.660 "data_offset": 2048, 00:14:42.660 "data_size": 63488 00:14:42.660 } 00:14:42.660 ] 00:14:42.660 }' 00:14:42.660 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.660 [2024-10-15 09:14:00.491150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:42.660 [2024-10-15 09:14:00.491953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:42.660 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.660 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.918 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.918 09:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.918 [2024-10-15 09:14:00.712838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:43.263 [2024-10-15 09:14:01.081547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:43.523 92.67 IOPS, 278.00 MiB/s [2024-10-15T09:14:01.419Z] [2024-10-15 09:14:01.307740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.782 09:14:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.782 [2024-10-15 09:14:01.631463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.782 "name": "raid_bdev1", 00:14:43.782 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:43.782 "strip_size_kb": 0, 00:14:43.782 "state": "online", 00:14:43.782 "raid_level": "raid1", 00:14:43.782 "superblock": true, 00:14:43.782 "num_base_bdevs": 4, 00:14:43.782 "num_base_bdevs_discovered": 3, 00:14:43.782 "num_base_bdevs_operational": 3, 00:14:43.782 "process": { 00:14:43.782 "type": "rebuild", 00:14:43.782 "target": "spare", 00:14:43.782 "progress": { 00:14:43.782 "blocks": 49152, 00:14:43.782 "percent": 77 00:14:43.782 } 00:14:43.782 }, 00:14:43.782 "base_bdevs_list": [ 00:14:43.782 { 
00:14:43.782 "name": "spare", 00:14:43.782 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:43.782 "is_configured": true, 00:14:43.782 "data_offset": 2048, 00:14:43.782 "data_size": 63488 00:14:43.782 }, 00:14:43.782 { 00:14:43.782 "name": null, 00:14:43.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.782 "is_configured": false, 00:14:43.782 "data_offset": 0, 00:14:43.782 "data_size": 63488 00:14:43.782 }, 00:14:43.782 { 00:14:43.782 "name": "BaseBdev3", 00:14:43.782 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:43.782 "is_configured": true, 00:14:43.782 "data_offset": 2048, 00:14:43.782 "data_size": 63488 00:14:43.782 }, 00:14:43.782 { 00:14:43.782 "name": "BaseBdev4", 00:14:43.782 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:43.782 "is_configured": true, 00:14:43.782 "data_offset": 2048, 00:14:43.782 "data_size": 63488 00:14:43.782 } 00:14:43.782 ] 00:14:43.782 }' 00:14:43.782 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.040 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.040 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.040 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.040 09:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.040 [2024-10-15 09:14:01.842166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:44.040 [2024-10-15 09:14:01.842673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:44.298 [2024-10-15 09:14:02.189212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:44.556 84.86 IOPS, 254.57 
MiB/s [2024-10-15T09:14:02.452Z] [2024-10-15 09:14:02.428966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.123 [2024-10-15 09:14:02.767599] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.123 "name": "raid_bdev1", 00:14:45.123 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:45.123 "strip_size_kb": 0, 00:14:45.123 "state": "online", 00:14:45.123 "raid_level": "raid1", 00:14:45.123 "superblock": true, 00:14:45.123 "num_base_bdevs": 4, 00:14:45.123 "num_base_bdevs_discovered": 3, 00:14:45.123 
"num_base_bdevs_operational": 3, 00:14:45.123 "process": { 00:14:45.123 "type": "rebuild", 00:14:45.123 "target": "spare", 00:14:45.123 "progress": { 00:14:45.123 "blocks": 63488, 00:14:45.123 "percent": 100 00:14:45.123 } 00:14:45.123 }, 00:14:45.123 "base_bdevs_list": [ 00:14:45.123 { 00:14:45.123 "name": "spare", 00:14:45.123 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:45.123 "is_configured": true, 00:14:45.123 "data_offset": 2048, 00:14:45.123 "data_size": 63488 00:14:45.123 }, 00:14:45.123 { 00:14:45.123 "name": null, 00:14:45.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.123 "is_configured": false, 00:14:45.123 "data_offset": 0, 00:14:45.123 "data_size": 63488 00:14:45.123 }, 00:14:45.123 { 00:14:45.123 "name": "BaseBdev3", 00:14:45.123 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:45.123 "is_configured": true, 00:14:45.123 "data_offset": 2048, 00:14:45.123 "data_size": 63488 00:14:45.123 }, 00:14:45.123 { 00:14:45.123 "name": "BaseBdev4", 00:14:45.123 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:45.123 "is_configured": true, 00:14:45.123 "data_offset": 2048, 00:14:45.123 "data_size": 63488 00:14:45.123 } 00:14:45.123 ] 00:14:45.123 }' 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.123 [2024-10-15 09:14:02.874598] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:45.123 [2024-10-15 09:14:02.878147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.123 09:14:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.317 78.75 IOPS, 236.25 MiB/s [2024-10-15T09:14:04.213Z] 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.317 "name": "raid_bdev1", 00:14:46.317 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:46.317 "strip_size_kb": 0, 00:14:46.317 "state": "online", 00:14:46.317 "raid_level": "raid1", 00:14:46.317 "superblock": true, 00:14:46.317 "num_base_bdevs": 4, 00:14:46.317 "num_base_bdevs_discovered": 3, 00:14:46.317 "num_base_bdevs_operational": 3, 00:14:46.317 "base_bdevs_list": [ 00:14:46.317 { 00:14:46.317 "name": "spare", 00:14:46.317 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:46.317 "is_configured": true, 00:14:46.317 
"data_offset": 2048, 00:14:46.317 "data_size": 63488 00:14:46.317 }, 00:14:46.317 { 00:14:46.317 "name": null, 00:14:46.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.317 "is_configured": false, 00:14:46.317 "data_offset": 0, 00:14:46.317 "data_size": 63488 00:14:46.317 }, 00:14:46.317 { 00:14:46.317 "name": "BaseBdev3", 00:14:46.317 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:46.317 "is_configured": true, 00:14:46.317 "data_offset": 2048, 00:14:46.317 "data_size": 63488 00:14:46.317 }, 00:14:46.317 { 00:14:46.317 "name": "BaseBdev4", 00:14:46.317 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:46.317 "is_configured": true, 00:14:46.317 "data_offset": 2048, 00:14:46.317 "data_size": 63488 00:14:46.317 } 00:14:46.317 ] 00:14:46.317 }' 00:14:46.317 09:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.317 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.318 "name": "raid_bdev1", 00:14:46.318 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:46.318 "strip_size_kb": 0, 00:14:46.318 "state": "online", 00:14:46.318 "raid_level": "raid1", 00:14:46.318 "superblock": true, 00:14:46.318 "num_base_bdevs": 4, 00:14:46.318 "num_base_bdevs_discovered": 3, 00:14:46.318 "num_base_bdevs_operational": 3, 00:14:46.318 "base_bdevs_list": [ 00:14:46.318 { 00:14:46.318 "name": "spare", 00:14:46.318 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:46.318 "is_configured": true, 00:14:46.318 "data_offset": 2048, 00:14:46.318 "data_size": 63488 00:14:46.318 }, 00:14:46.318 { 00:14:46.318 "name": null, 00:14:46.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.318 "is_configured": false, 00:14:46.318 "data_offset": 0, 00:14:46.318 "data_size": 63488 00:14:46.318 }, 00:14:46.318 { 00:14:46.318 "name": "BaseBdev3", 00:14:46.318 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:46.318 "is_configured": true, 00:14:46.318 "data_offset": 2048, 00:14:46.318 "data_size": 63488 00:14:46.318 }, 00:14:46.318 { 00:14:46.318 "name": "BaseBdev4", 00:14:46.318 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:46.318 "is_configured": true, 00:14:46.318 "data_offset": 2048, 00:14:46.318 "data_size": 63488 00:14:46.318 } 00:14:46.318 ] 00:14:46.318 }' 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.318 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.576 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.576 "name": "raid_bdev1", 00:14:46.576 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:46.576 "strip_size_kb": 0, 00:14:46.576 "state": "online", 00:14:46.576 "raid_level": "raid1", 00:14:46.576 "superblock": true, 00:14:46.576 "num_base_bdevs": 4, 00:14:46.577 "num_base_bdevs_discovered": 3, 00:14:46.577 "num_base_bdevs_operational": 3, 00:14:46.577 "base_bdevs_list": [ 00:14:46.577 { 00:14:46.577 "name": "spare", 00:14:46.577 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:46.577 "is_configured": true, 00:14:46.577 "data_offset": 2048, 00:14:46.577 "data_size": 63488 00:14:46.577 }, 00:14:46.577 { 00:14:46.577 "name": null, 00:14:46.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.577 "is_configured": false, 00:14:46.577 "data_offset": 0, 00:14:46.577 "data_size": 63488 00:14:46.577 }, 00:14:46.577 { 00:14:46.577 "name": "BaseBdev3", 00:14:46.577 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:46.577 "is_configured": true, 00:14:46.577 "data_offset": 2048, 00:14:46.577 "data_size": 63488 00:14:46.577 }, 00:14:46.577 { 00:14:46.577 "name": "BaseBdev4", 00:14:46.577 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:46.577 "is_configured": true, 00:14:46.577 "data_offset": 2048, 00:14:46.577 "data_size": 63488 00:14:46.577 } 00:14:46.577 ] 00:14:46.577 }' 00:14:46.577 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.577 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.835 74.33 IOPS, 223.00 MiB/s [2024-10-15T09:14:04.731Z] 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.835 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.835 09:14:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.835 [2024-10-15 09:14:04.699158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.835 [2024-10-15 09:14:04.699217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.094 00:14:47.094 Latency(us) 00:14:47.094 [2024-10-15T09:14:04.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.094 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:47.094 raid_bdev1 : 9.54 72.14 216.41 0.00 0.00 19538.18 327.32 121799.66 00:14:47.094 [2024-10-15T09:14:04.990Z] =================================================================================================================== 00:14:47.094 [2024-10-15T09:14:04.990Z] Total : 72.14 216.41 0.00 0.00 19538.18 327.32 121799.66 00:14:47.094 { 00:14:47.094 "results": [ 00:14:47.094 { 00:14:47.094 "job": "raid_bdev1", 00:14:47.094 "core_mask": "0x1", 00:14:47.094 "workload": "randrw", 00:14:47.094 "percentage": 50, 00:14:47.094 "status": "finished", 00:14:47.094 "queue_depth": 2, 00:14:47.094 "io_size": 3145728, 00:14:47.094 "runtime": 9.537623, 00:14:47.094 "iops": 72.1353737718507, 00:14:47.094 "mibps": 216.40612131555207, 00:14:47.094 "io_failed": 0, 00:14:47.094 "io_timeout": 0, 00:14:47.094 "avg_latency_us": 19538.18023763583, 00:14:47.094 "min_latency_us": 327.32227074235806, 00:14:47.094 "max_latency_us": 121799.6576419214 00:14:47.094 } 00:14:47.094 ], 00:14:47.094 "core_count": 1 00:14:47.094 } 00:14:47.094 [2024-10-15 09:14:04.796825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.094 [2024-10-15 09:14:04.796904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.094 [2024-10-15 09:14:04.797017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.094 
[2024-10-15 09:14:04.797032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.094 09:14:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.094 09:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:47.353 /dev/nbd0 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.353 1+0 records in 00:14:47.353 1+0 records out 00:14:47.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670005 s, 6.1 MB/s 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- 
# (( i < 1 )) 00:14:47.353 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:47.612 /dev/nbd1 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.612 1+0 records in 00:14:47.612 1+0 records out 00:14:47.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507609 s, 8.1 MB/s 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.612 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:47.870 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:47.870 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.870 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:47.870 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.870 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:47.870 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.870 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.129 09:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:48.387 /dev/nbd1 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:48.387 09:14:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.387 1+0 records in 00:14:48.387 1+0 records out 00:14:48.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439139 s, 9.3 MB/s 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.387 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:14:48.646 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:48.646 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.646 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:48.646 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.646 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:48.646 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.646 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0') 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.905 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.166 [2024-10-15 09:14:06.885583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.166 [2024-10-15 09:14:06.885843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.166 [2024-10-15 09:14:06.885925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:49.166 [2024-10-15 09:14:06.885976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.166 [2024-10-15 09:14:06.888788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.166 [2024-10-15 09:14:06.888916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.166 [2024-10-15 09:14:06.889035] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:49.166 [2024-10-15 09:14:06.889124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.166 [2024-10-15 09:14:06.889369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.166 [2024-10-15 09:14:06.889563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.166 spare 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.166 [2024-10-15 09:14:06.989601] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:14:49.166 [2024-10-15 09:14:06.989679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.166 [2024-10-15 09:14:06.990133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:49.166 [2024-10-15 09:14:06.990420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:49.166 [2024-10-15 09:14:06.990436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:49.166 [2024-10-15 09:14:06.990666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.166 09:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.166 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.166 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.166 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.166 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.166 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.166 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.166 "name": "raid_bdev1", 00:14:49.166 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:49.166 "strip_size_kb": 0, 00:14:49.166 "state": "online", 00:14:49.166 "raid_level": "raid1", 00:14:49.166 "superblock": true, 00:14:49.166 "num_base_bdevs": 4, 00:14:49.166 "num_base_bdevs_discovered": 3, 00:14:49.166 "num_base_bdevs_operational": 3, 00:14:49.166 "base_bdevs_list": [ 00:14:49.166 { 00:14:49.166 "name": "spare", 00:14:49.166 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:49.166 "is_configured": true, 00:14:49.166 "data_offset": 2048, 00:14:49.166 "data_size": 63488 00:14:49.166 }, 00:14:49.166 { 00:14:49.166 "name": null, 00:14:49.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.166 "is_configured": false, 00:14:49.166 "data_offset": 2048, 00:14:49.166 "data_size": 63488 00:14:49.166 }, 00:14:49.166 { 00:14:49.166 "name": "BaseBdev3", 00:14:49.166 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:49.166 "is_configured": true, 00:14:49.166 "data_offset": 2048, 00:14:49.166 "data_size": 63488 00:14:49.166 }, 00:14:49.166 { 00:14:49.166 "name": "BaseBdev4", 00:14:49.166 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:49.166 "is_configured": true, 00:14:49.166 "data_offset": 2048, 00:14:49.166 "data_size": 63488 00:14:49.166 } 00:14:49.166 ] 00:14:49.166 }' 00:14:49.166 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.166 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.739 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.739 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.739 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.739 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.739 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.740 "name": "raid_bdev1", 00:14:49.740 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:49.740 "strip_size_kb": 0, 00:14:49.740 "state": "online", 00:14:49.740 "raid_level": "raid1", 00:14:49.740 "superblock": true, 00:14:49.740 "num_base_bdevs": 4, 00:14:49.740 "num_base_bdevs_discovered": 3, 00:14:49.740 "num_base_bdevs_operational": 3, 00:14:49.740 "base_bdevs_list": [ 00:14:49.740 { 00:14:49.740 "name": "spare", 00:14:49.740 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:49.740 "is_configured": true, 00:14:49.740 "data_offset": 2048, 00:14:49.740 "data_size": 63488 00:14:49.740 }, 
00:14:49.740 { 00:14:49.740 "name": null, 00:14:49.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.740 "is_configured": false, 00:14:49.740 "data_offset": 2048, 00:14:49.740 "data_size": 63488 00:14:49.740 }, 00:14:49.740 { 00:14:49.740 "name": "BaseBdev3", 00:14:49.740 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:49.740 "is_configured": true, 00:14:49.740 "data_offset": 2048, 00:14:49.740 "data_size": 63488 00:14:49.740 }, 00:14:49.740 { 00:14:49.740 "name": "BaseBdev4", 00:14:49.740 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:49.740 "is_configured": true, 00:14:49.740 "data_offset": 2048, 00:14:49.740 "data_size": 63488 00:14:49.740 } 00:14:49.740 ] 00:14:49.740 }' 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:49.740 09:14:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.740 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.740 [2024-10-15 09:14:07.633357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.999 
09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.999 "name": "raid_bdev1", 00:14:49.999 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:49.999 "strip_size_kb": 0, 00:14:49.999 "state": "online", 00:14:49.999 "raid_level": "raid1", 00:14:49.999 "superblock": true, 00:14:49.999 "num_base_bdevs": 4, 00:14:49.999 "num_base_bdevs_discovered": 2, 00:14:49.999 "num_base_bdevs_operational": 2, 00:14:49.999 "base_bdevs_list": [ 00:14:49.999 { 00:14:49.999 "name": null, 00:14:49.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.999 "is_configured": false, 00:14:49.999 "data_offset": 0, 00:14:49.999 "data_size": 63488 00:14:49.999 }, 00:14:49.999 { 00:14:49.999 "name": null, 00:14:49.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.999 "is_configured": false, 00:14:49.999 "data_offset": 2048, 00:14:49.999 "data_size": 63488 00:14:49.999 }, 00:14:49.999 { 00:14:49.999 "name": "BaseBdev3", 00:14:49.999 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:49.999 "is_configured": true, 00:14:49.999 "data_offset": 2048, 00:14:49.999 "data_size": 63488 00:14:49.999 }, 00:14:49.999 { 00:14:49.999 "name": "BaseBdev4", 00:14:49.999 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:49.999 "is_configured": true, 00:14:49.999 "data_offset": 2048, 00:14:49.999 "data_size": 63488 00:14:49.999 } 00:14:49.999 ] 00:14:49.999 }' 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.999 09:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.257 09:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:50.257 09:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.257 09:14:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.257 [2024-10-15 09:14:08.148891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.257 [2024-10-15 09:14:08.149260] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.257 [2024-10-15 09:14:08.149358] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:50.257 [2024-10-15 09:14:08.149529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.516 [2024-10-15 09:14:08.167875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:50.516 09:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.516 09:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:50.516 [2024-10-15 09:14:08.170259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.454 "name": "raid_bdev1", 00:14:51.454 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:51.454 "strip_size_kb": 0, 00:14:51.454 "state": "online", 00:14:51.454 "raid_level": "raid1", 00:14:51.454 "superblock": true, 00:14:51.454 "num_base_bdevs": 4, 00:14:51.454 "num_base_bdevs_discovered": 3, 00:14:51.454 "num_base_bdevs_operational": 3, 00:14:51.454 "process": { 00:14:51.454 "type": "rebuild", 00:14:51.454 "target": "spare", 00:14:51.454 "progress": { 00:14:51.454 "blocks": 20480, 00:14:51.454 "percent": 32 00:14:51.454 } 00:14:51.454 }, 00:14:51.454 "base_bdevs_list": [ 00:14:51.454 { 00:14:51.454 "name": "spare", 00:14:51.454 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:51.454 "is_configured": true, 00:14:51.454 "data_offset": 2048, 00:14:51.454 "data_size": 63488 00:14:51.454 }, 00:14:51.454 { 00:14:51.454 "name": null, 00:14:51.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.454 "is_configured": false, 00:14:51.454 "data_offset": 2048, 00:14:51.454 "data_size": 63488 00:14:51.454 }, 00:14:51.454 { 00:14:51.454 "name": "BaseBdev3", 00:14:51.454 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:51.454 "is_configured": true, 00:14:51.454 "data_offset": 2048, 00:14:51.454 "data_size": 63488 00:14:51.454 }, 00:14:51.454 { 00:14:51.454 "name": "BaseBdev4", 00:14:51.454 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:51.454 "is_configured": true, 00:14:51.454 "data_offset": 2048, 00:14:51.454 "data_size": 63488 00:14:51.454 } 00:14:51.454 ] 00:14:51.454 }' 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.454 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.454 [2024-10-15 09:14:09.305980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.713 [2024-10-15 09:14:09.376957] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.713 [2024-10-15 09:14:09.377183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.713 [2024-10-15 09:14:09.377232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.713 [2024-10-15 09:14:09.377261] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.713 "name": "raid_bdev1", 00:14:51.713 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:51.713 "strip_size_kb": 0, 00:14:51.713 "state": "online", 00:14:51.713 "raid_level": "raid1", 00:14:51.713 "superblock": true, 00:14:51.713 "num_base_bdevs": 4, 00:14:51.713 "num_base_bdevs_discovered": 2, 00:14:51.713 "num_base_bdevs_operational": 2, 00:14:51.713 "base_bdevs_list": [ 00:14:51.713 { 00:14:51.713 "name": null, 00:14:51.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.713 "is_configured": false, 00:14:51.713 "data_offset": 0, 00:14:51.713 "data_size": 63488 00:14:51.713 }, 00:14:51.713 { 00:14:51.713 "name": null, 00:14:51.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.713 "is_configured": false, 00:14:51.713 "data_offset": 2048, 
00:14:51.713 "data_size": 63488 00:14:51.713 }, 00:14:51.713 { 00:14:51.713 "name": "BaseBdev3", 00:14:51.713 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:51.713 "is_configured": true, 00:14:51.713 "data_offset": 2048, 00:14:51.713 "data_size": 63488 00:14:51.713 }, 00:14:51.713 { 00:14:51.713 "name": "BaseBdev4", 00:14:51.713 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:51.713 "is_configured": true, 00:14:51.713 "data_offset": 2048, 00:14:51.713 "data_size": 63488 00:14:51.713 } 00:14:51.713 ] 00:14:51.713 }' 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.713 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.280 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:52.280 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.280 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.280 [2024-10-15 09:14:09.891922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:52.280 [2024-10-15 09:14:09.892014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.280 [2024-10-15 09:14:09.892048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:52.280 [2024-10-15 09:14:09.892063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.280 [2024-10-15 09:14:09.892643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.280 [2024-10-15 09:14:09.892671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:52.280 [2024-10-15 09:14:09.892812] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:52.280 [2024-10-15 09:14:09.892832] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:52.280 [2024-10-15 09:14:09.892846] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:52.280 [2024-10-15 09:14:09.892872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.280 [2024-10-15 09:14:09.911153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:52.280 spare 00:14:52.280 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.280 [2024-10-15 09:14:09.913554] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.280 09:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.218 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.218 "name": "raid_bdev1", 00:14:53.218 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:53.218 "strip_size_kb": 0, 00:14:53.218 "state": "online", 00:14:53.218 "raid_level": "raid1", 00:14:53.218 "superblock": true, 00:14:53.218 "num_base_bdevs": 4, 00:14:53.218 "num_base_bdevs_discovered": 3, 00:14:53.218 "num_base_bdevs_operational": 3, 00:14:53.218 "process": { 00:14:53.218 "type": "rebuild", 00:14:53.218 "target": "spare", 00:14:53.218 "progress": { 00:14:53.218 "blocks": 20480, 00:14:53.218 "percent": 32 00:14:53.218 } 00:14:53.218 }, 00:14:53.218 "base_bdevs_list": [ 00:14:53.219 { 00:14:53.219 "name": "spare", 00:14:53.219 "uuid": "e7ee7157-dc1d-59bc-adb3-8f6cbad685a7", 00:14:53.219 "is_configured": true, 00:14:53.219 "data_offset": 2048, 00:14:53.219 "data_size": 63488 00:14:53.219 }, 00:14:53.219 { 00:14:53.219 "name": null, 00:14:53.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.219 "is_configured": false, 00:14:53.219 "data_offset": 2048, 00:14:53.219 "data_size": 63488 00:14:53.219 }, 00:14:53.219 { 00:14:53.219 "name": "BaseBdev3", 00:14:53.219 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:53.219 "is_configured": true, 00:14:53.219 "data_offset": 2048, 00:14:53.219 "data_size": 63488 00:14:53.219 }, 00:14:53.219 { 00:14:53.219 "name": "BaseBdev4", 00:14:53.219 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:53.219 "is_configured": true, 00:14:53.219 "data_offset": 2048, 00:14:53.219 "data_size": 63488 00:14:53.219 } 00:14:53.219 ] 00:14:53.219 }' 00:14:53.219 09:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.219 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.219 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:53.219 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.219 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:53.219 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.219 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.219 [2024-10-15 09:14:11.073246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.478 [2024-10-15 09:14:11.120074] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:53.478 [2024-10-15 09:14:11.120189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.478 [2024-10-15 09:14:11.120222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.478 [2024-10-15 09:14:11.120231] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.478 "name": "raid_bdev1", 00:14:53.478 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:53.478 "strip_size_kb": 0, 00:14:53.478 "state": "online", 00:14:53.478 "raid_level": "raid1", 00:14:53.478 "superblock": true, 00:14:53.478 "num_base_bdevs": 4, 00:14:53.478 "num_base_bdevs_discovered": 2, 00:14:53.478 "num_base_bdevs_operational": 2, 00:14:53.478 "base_bdevs_list": [ 00:14:53.478 { 00:14:53.478 "name": null, 00:14:53.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.478 "is_configured": false, 00:14:53.478 "data_offset": 0, 00:14:53.478 "data_size": 63488 00:14:53.478 }, 00:14:53.478 { 00:14:53.478 "name": null, 00:14:53.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.478 "is_configured": false, 00:14:53.478 "data_offset": 2048, 00:14:53.478 "data_size": 63488 00:14:53.478 }, 00:14:53.478 { 00:14:53.478 "name": "BaseBdev3", 00:14:53.478 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:53.478 "is_configured": true, 
00:14:53.478 "data_offset": 2048, 00:14:53.478 "data_size": 63488 00:14:53.478 }, 00:14:53.478 { 00:14:53.478 "name": "BaseBdev4", 00:14:53.478 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:53.478 "is_configured": true, 00:14:53.478 "data_offset": 2048, 00:14:53.478 "data_size": 63488 00:14:53.478 } 00:14:53.478 ] 00:14:53.478 }' 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.478 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.738 "name": "raid_bdev1", 00:14:53.738 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:53.738 "strip_size_kb": 0, 00:14:53.738 "state": "online", 00:14:53.738 "raid_level": "raid1", 00:14:53.738 
"superblock": true, 00:14:53.738 "num_base_bdevs": 4, 00:14:53.738 "num_base_bdevs_discovered": 2, 00:14:53.738 "num_base_bdevs_operational": 2, 00:14:53.738 "base_bdevs_list": [ 00:14:53.738 { 00:14:53.738 "name": null, 00:14:53.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.738 "is_configured": false, 00:14:53.738 "data_offset": 0, 00:14:53.738 "data_size": 63488 00:14:53.738 }, 00:14:53.738 { 00:14:53.738 "name": null, 00:14:53.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.738 "is_configured": false, 00:14:53.738 "data_offset": 2048, 00:14:53.738 "data_size": 63488 00:14:53.738 }, 00:14:53.738 { 00:14:53.738 "name": "BaseBdev3", 00:14:53.738 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:53.738 "is_configured": true, 00:14:53.738 "data_offset": 2048, 00:14:53.738 "data_size": 63488 00:14:53.738 }, 00:14:53.738 { 00:14:53.738 "name": "BaseBdev4", 00:14:53.738 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:53.738 "is_configured": true, 00:14:53.738 "data_offset": 2048, 00:14:53.738 "data_size": 63488 00:14:53.738 } 00:14:53.738 ] 00:14:53.738 }' 00:14:53.738 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.998 [2024-10-15 09:14:11.746389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:53.998 [2024-10-15 09:14:11.746482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.998 [2024-10-15 09:14:11.746511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:53.998 [2024-10-15 09:14:11.746522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.998 [2024-10-15 09:14:11.747053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.998 [2024-10-15 09:14:11.747177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:53.998 [2024-10-15 09:14:11.747289] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:53.998 [2024-10-15 09:14:11.747304] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:53.998 [2024-10-15 09:14:11.747314] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:53.998 [2024-10-15 09:14:11.747326] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:53.998 BaseBdev1 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.998 09:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.957 "name": "raid_bdev1", 00:14:54.957 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:54.957 "strip_size_kb": 0, 00:14:54.957 "state": "online", 00:14:54.957 "raid_level": "raid1", 00:14:54.957 "superblock": true, 00:14:54.957 
"num_base_bdevs": 4, 00:14:54.957 "num_base_bdevs_discovered": 2, 00:14:54.957 "num_base_bdevs_operational": 2, 00:14:54.957 "base_bdevs_list": [ 00:14:54.957 { 00:14:54.957 "name": null, 00:14:54.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.957 "is_configured": false, 00:14:54.957 "data_offset": 0, 00:14:54.957 "data_size": 63488 00:14:54.957 }, 00:14:54.957 { 00:14:54.957 "name": null, 00:14:54.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.957 "is_configured": false, 00:14:54.957 "data_offset": 2048, 00:14:54.957 "data_size": 63488 00:14:54.957 }, 00:14:54.957 { 00:14:54.957 "name": "BaseBdev3", 00:14:54.957 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:54.957 "is_configured": true, 00:14:54.957 "data_offset": 2048, 00:14:54.957 "data_size": 63488 00:14:54.957 }, 00:14:54.957 { 00:14:54.957 "name": "BaseBdev4", 00:14:54.957 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:54.957 "is_configured": true, 00:14:54.957 "data_offset": 2048, 00:14:54.957 "data_size": 63488 00:14:54.957 } 00:14:54.957 ] 00:14:54.957 }' 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.957 09:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.524 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.524 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.524 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.524 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.525 09:14:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.525 "name": "raid_bdev1", 00:14:55.525 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:55.525 "strip_size_kb": 0, 00:14:55.525 "state": "online", 00:14:55.525 "raid_level": "raid1", 00:14:55.525 "superblock": true, 00:14:55.525 "num_base_bdevs": 4, 00:14:55.525 "num_base_bdevs_discovered": 2, 00:14:55.525 "num_base_bdevs_operational": 2, 00:14:55.525 "base_bdevs_list": [ 00:14:55.525 { 00:14:55.525 "name": null, 00:14:55.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.525 "is_configured": false, 00:14:55.525 "data_offset": 0, 00:14:55.525 "data_size": 63488 00:14:55.525 }, 00:14:55.525 { 00:14:55.525 "name": null, 00:14:55.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.525 "is_configured": false, 00:14:55.525 "data_offset": 2048, 00:14:55.525 "data_size": 63488 00:14:55.525 }, 00:14:55.525 { 00:14:55.525 "name": "BaseBdev3", 00:14:55.525 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:55.525 "is_configured": true, 00:14:55.525 "data_offset": 2048, 00:14:55.525 "data_size": 63488 00:14:55.525 }, 00:14:55.525 { 00:14:55.525 "name": "BaseBdev4", 00:14:55.525 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:55.525 "is_configured": true, 00:14:55.525 "data_offset": 2048, 00:14:55.525 "data_size": 63488 00:14:55.525 } 00:14:55.525 ] 00:14:55.525 }' 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.525 09:14:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.525 [2024-10-15 09:14:13.364889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.525 [2024-10-15 09:14:13.365165] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:55.525 [2024-10-15 09:14:13.365190] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:14:55.525 request: 00:14:55.525 { 00:14:55.525 "base_bdev": "BaseBdev1", 00:14:55.525 "raid_bdev": "raid_bdev1", 00:14:55.525 "method": "bdev_raid_add_base_bdev", 00:14:55.525 "req_id": 1 00:14:55.525 } 00:14:55.525 Got JSON-RPC error response 00:14:55.525 response: 00:14:55.525 { 00:14:55.525 "code": -22, 00:14:55.525 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:55.525 } 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:55.525 09:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:56.517 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.518 09:14:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.518 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.777 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.777 "name": "raid_bdev1", 00:14:56.777 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:56.777 "strip_size_kb": 0, 00:14:56.777 "state": "online", 00:14:56.777 "raid_level": "raid1", 00:14:56.777 "superblock": true, 00:14:56.777 "num_base_bdevs": 4, 00:14:56.777 "num_base_bdevs_discovered": 2, 00:14:56.777 "num_base_bdevs_operational": 2, 00:14:56.777 "base_bdevs_list": [ 00:14:56.777 { 00:14:56.777 "name": null, 00:14:56.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.777 "is_configured": false, 00:14:56.777 "data_offset": 0, 00:14:56.777 "data_size": 63488 00:14:56.777 }, 00:14:56.777 { 00:14:56.777 "name": null, 00:14:56.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.777 "is_configured": false, 00:14:56.777 "data_offset": 2048, 00:14:56.777 "data_size": 63488 00:14:56.777 }, 00:14:56.777 { 00:14:56.777 "name": "BaseBdev3", 00:14:56.777 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:56.777 "is_configured": true, 00:14:56.777 "data_offset": 2048, 00:14:56.777 "data_size": 63488 00:14:56.777 }, 00:14:56.777 { 00:14:56.777 "name": "BaseBdev4", 00:14:56.777 "uuid": 
"508f813c-58d0-5922-8f26-1b2963551f15", 00:14:56.777 "is_configured": true, 00:14:56.777 "data_offset": 2048, 00:14:56.777 "data_size": 63488 00:14:56.777 } 00:14:56.777 ] 00:14:56.777 }' 00:14:56.777 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.777 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.037 "name": "raid_bdev1", 00:14:57.037 "uuid": "593409a4-2523-4666-8598-d6a0cf7dba35", 00:14:57.037 "strip_size_kb": 0, 00:14:57.037 "state": "online", 00:14:57.037 "raid_level": "raid1", 00:14:57.037 "superblock": true, 00:14:57.037 "num_base_bdevs": 4, 00:14:57.037 "num_base_bdevs_discovered": 2, 00:14:57.037 "num_base_bdevs_operational": 2, 00:14:57.037 
"base_bdevs_list": [ 00:14:57.037 { 00:14:57.037 "name": null, 00:14:57.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.037 "is_configured": false, 00:14:57.037 "data_offset": 0, 00:14:57.037 "data_size": 63488 00:14:57.037 }, 00:14:57.037 { 00:14:57.037 "name": null, 00:14:57.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.037 "is_configured": false, 00:14:57.037 "data_offset": 2048, 00:14:57.037 "data_size": 63488 00:14:57.037 }, 00:14:57.037 { 00:14:57.037 "name": "BaseBdev3", 00:14:57.037 "uuid": "6d6d36a5-03a1-52f5-a21b-de3e1fda5c85", 00:14:57.037 "is_configured": true, 00:14:57.037 "data_offset": 2048, 00:14:57.037 "data_size": 63488 00:14:57.037 }, 00:14:57.037 { 00:14:57.037 "name": "BaseBdev4", 00:14:57.037 "uuid": "508f813c-58d0-5922-8f26-1b2963551f15", 00:14:57.037 "is_configured": true, 00:14:57.037 "data_offset": 2048, 00:14:57.037 "data_size": 63488 00:14:57.037 } 00:14:57.037 ] 00:14:57.037 }' 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.037 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79372 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79372 ']' 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79372 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79372 00:14:57.297 killing process with pid 79372 00:14:57.297 Received shutdown signal, test time was about 19.770624 seconds 00:14:57.297 00:14:57.297 Latency(us) 00:14:57.297 [2024-10-15T09:14:15.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.297 [2024-10-15T09:14:15.193Z] =================================================================================================================== 00:14:57.297 [2024-10-15T09:14:15.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79372' 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79372 00:14:57.297 09:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79372 00:14:57.297 [2024-10-15 09:14:14.984271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.297 [2024-10-15 09:14:14.984447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.297 [2024-10-15 09:14:14.984541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.297 [2024-10-15 09:14:14.984559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:57.866 [2024-10-15 09:14:15.490747] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.245 09:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:59.245 00:14:59.245 real 0m23.698s 00:14:59.245 user 0m30.695s 00:14:59.245 sys 0m2.886s 00:14:59.245 09:14:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.245 ************************************ 00:14:59.245 END TEST raid_rebuild_test_sb_io 00:14:59.245 ************************************ 00:14:59.245 09:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.245 09:14:16 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:59.245 09:14:16 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:59.245 09:14:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:59.245 09:14:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.245 09:14:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.245 ************************************ 00:14:59.245 START TEST raid5f_state_function_test 00:14:59.245 ************************************ 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.245 09:14:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:14:59.245 Process raid pid: 80127 00:14:59.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80127 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80127' 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80127 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80127 ']' 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.245 09:14:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.245 [2024-10-15 09:14:17.097765] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:14:59.245 [2024-10-15 09:14:17.099075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.504 [2024-10-15 09:14:17.292020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.763 [2024-10-15 09:14:17.457782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.021 [2024-10-15 09:14:17.757290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.021 [2024-10-15 09:14:17.757475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.278 [2024-10-15 09:14:18.038232] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.278 [2024-10-15 09:14:18.038303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.278 [2024-10-15 09:14:18.038315] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.278 [2024-10-15 09:14:18.038328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.278 [2024-10-15 09:14:18.038336] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:00.278 [2024-10-15 09:14:18.038348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.278 "name": "Existed_Raid", 00:15:00.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.278 "strip_size_kb": 64, 00:15:00.278 "state": "configuring", 00:15:00.278 "raid_level": "raid5f", 00:15:00.278 "superblock": false, 00:15:00.278 "num_base_bdevs": 3, 00:15:00.278 "num_base_bdevs_discovered": 0, 00:15:00.278 "num_base_bdevs_operational": 3, 00:15:00.278 "base_bdevs_list": [ 00:15:00.278 { 00:15:00.278 "name": "BaseBdev1", 00:15:00.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.278 "is_configured": false, 00:15:00.278 "data_offset": 0, 00:15:00.278 "data_size": 0 00:15:00.278 }, 00:15:00.278 { 00:15:00.278 "name": "BaseBdev2", 00:15:00.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.278 "is_configured": false, 00:15:00.278 "data_offset": 0, 00:15:00.278 "data_size": 0 00:15:00.278 }, 00:15:00.278 { 00:15:00.278 "name": "BaseBdev3", 00:15:00.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.278 "is_configured": false, 00:15:00.278 "data_offset": 0, 00:15:00.278 "data_size": 0 00:15:00.278 } 00:15:00.278 ] 00:15:00.278 }' 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.278 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 [2024-10-15 09:14:18.497900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.845 [2024-10-15 09:14:18.498012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 [2024-10-15 09:14:18.509937] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.845 [2024-10-15 09:14:18.510048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.845 [2024-10-15 09:14:18.510099] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.845 [2024-10-15 09:14:18.510144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.845 [2024-10-15 09:14:18.510177] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:00.845 [2024-10-15 09:14:18.510216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 [2024-10-15 09:14:18.577236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.845 BaseBdev1 00:15:00.845 09:14:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 [ 00:15:00.845 { 00:15:00.845 "name": "BaseBdev1", 00:15:00.845 "aliases": [ 00:15:00.845 "ff750dba-7e74-4d58-a46b-b4358b0062fa" 00:15:00.845 ], 00:15:00.845 "product_name": "Malloc disk", 00:15:00.845 "block_size": 512, 00:15:00.845 "num_blocks": 65536, 00:15:00.845 "uuid": "ff750dba-7e74-4d58-a46b-b4358b0062fa", 00:15:00.845 "assigned_rate_limits": { 00:15:00.845 "rw_ios_per_sec": 0, 00:15:00.845 
"rw_mbytes_per_sec": 0, 00:15:00.845 "r_mbytes_per_sec": 0, 00:15:00.845 "w_mbytes_per_sec": 0 00:15:00.845 }, 00:15:00.845 "claimed": true, 00:15:00.845 "claim_type": "exclusive_write", 00:15:00.845 "zoned": false, 00:15:00.845 "supported_io_types": { 00:15:00.845 "read": true, 00:15:00.845 "write": true, 00:15:00.845 "unmap": true, 00:15:00.845 "flush": true, 00:15:00.845 "reset": true, 00:15:00.845 "nvme_admin": false, 00:15:00.845 "nvme_io": false, 00:15:00.845 "nvme_io_md": false, 00:15:00.845 "write_zeroes": true, 00:15:00.845 "zcopy": true, 00:15:00.845 "get_zone_info": false, 00:15:00.845 "zone_management": false, 00:15:00.845 "zone_append": false, 00:15:00.845 "compare": false, 00:15:00.845 "compare_and_write": false, 00:15:00.845 "abort": true, 00:15:00.845 "seek_hole": false, 00:15:00.845 "seek_data": false, 00:15:00.845 "copy": true, 00:15:00.845 "nvme_iov_md": false 00:15:00.845 }, 00:15:00.845 "memory_domains": [ 00:15:00.845 { 00:15:00.845 "dma_device_id": "system", 00:15:00.845 "dma_device_type": 1 00:15:00.845 }, 00:15:00.845 { 00:15:00.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.845 "dma_device_type": 2 00:15:00.845 } 00:15:00.845 ], 00:15:00.845 "driver_specific": {} 00:15:00.845 } 00:15:00.845 ] 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.845 09:14:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.845 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.845 "name": "Existed_Raid", 00:15:00.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.845 "strip_size_kb": 64, 00:15:00.845 "state": "configuring", 00:15:00.845 "raid_level": "raid5f", 00:15:00.845 "superblock": false, 00:15:00.845 "num_base_bdevs": 3, 00:15:00.846 "num_base_bdevs_discovered": 1, 00:15:00.846 "num_base_bdevs_operational": 3, 00:15:00.846 "base_bdevs_list": [ 00:15:00.846 { 00:15:00.846 "name": "BaseBdev1", 00:15:00.846 "uuid": "ff750dba-7e74-4d58-a46b-b4358b0062fa", 00:15:00.846 "is_configured": true, 00:15:00.846 "data_offset": 0, 00:15:00.846 "data_size": 65536 00:15:00.846 }, 00:15:00.846 { 00:15:00.846 "name": 
"BaseBdev2", 00:15:00.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.846 "is_configured": false, 00:15:00.846 "data_offset": 0, 00:15:00.846 "data_size": 0 00:15:00.846 }, 00:15:00.846 { 00:15:00.846 "name": "BaseBdev3", 00:15:00.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.846 "is_configured": false, 00:15:00.846 "data_offset": 0, 00:15:00.846 "data_size": 0 00:15:00.846 } 00:15:00.846 ] 00:15:00.846 }' 00:15:00.846 09:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.846 09:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.412 [2024-10-15 09:14:19.108572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.412 [2024-10-15 09:14:19.108731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.412 [2024-10-15 09:14:19.116636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.412 [2024-10-15 09:14:19.119275] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:01.412 [2024-10-15 09:14:19.119329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.412 [2024-10-15 09:14:19.119342] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.412 [2024-10-15 09:14:19.119353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.412 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.413 "name": "Existed_Raid", 00:15:01.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.413 "strip_size_kb": 64, 00:15:01.413 "state": "configuring", 00:15:01.413 "raid_level": "raid5f", 00:15:01.413 "superblock": false, 00:15:01.413 "num_base_bdevs": 3, 00:15:01.413 "num_base_bdevs_discovered": 1, 00:15:01.413 "num_base_bdevs_operational": 3, 00:15:01.413 "base_bdevs_list": [ 00:15:01.413 { 00:15:01.413 "name": "BaseBdev1", 00:15:01.413 "uuid": "ff750dba-7e74-4d58-a46b-b4358b0062fa", 00:15:01.413 "is_configured": true, 00:15:01.413 "data_offset": 0, 00:15:01.413 "data_size": 65536 00:15:01.413 }, 00:15:01.413 { 00:15:01.413 "name": "BaseBdev2", 00:15:01.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.413 "is_configured": false, 00:15:01.413 "data_offset": 0, 00:15:01.413 "data_size": 0 00:15:01.413 }, 00:15:01.413 { 00:15:01.413 "name": "BaseBdev3", 00:15:01.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.413 "is_configured": false, 00:15:01.413 "data_offset": 0, 00:15:01.413 "data_size": 0 00:15:01.413 } 00:15:01.413 ] 00:15:01.413 }' 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.413 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.671 09:14:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:01.671 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.671 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.929 [2024-10-15 09:14:19.576356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.929 BaseBdev2 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.929 09:14:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.930 [ 00:15:01.930 { 00:15:01.930 "name": "BaseBdev2", 00:15:01.930 "aliases": [ 00:15:01.930 "69e02ce3-9c50-4c28-8e7c-c1155a87943a" 00:15:01.930 ], 00:15:01.930 "product_name": "Malloc disk", 00:15:01.930 "block_size": 512, 00:15:01.930 "num_blocks": 65536, 00:15:01.930 "uuid": "69e02ce3-9c50-4c28-8e7c-c1155a87943a", 00:15:01.930 "assigned_rate_limits": { 00:15:01.930 "rw_ios_per_sec": 0, 00:15:01.930 "rw_mbytes_per_sec": 0, 00:15:01.930 "r_mbytes_per_sec": 0, 00:15:01.930 "w_mbytes_per_sec": 0 00:15:01.930 }, 00:15:01.930 "claimed": true, 00:15:01.930 "claim_type": "exclusive_write", 00:15:01.930 "zoned": false, 00:15:01.930 "supported_io_types": { 00:15:01.930 "read": true, 00:15:01.930 "write": true, 00:15:01.930 "unmap": true, 00:15:01.930 "flush": true, 00:15:01.930 "reset": true, 00:15:01.930 "nvme_admin": false, 00:15:01.930 "nvme_io": false, 00:15:01.930 "nvme_io_md": false, 00:15:01.930 "write_zeroes": true, 00:15:01.930 "zcopy": true, 00:15:01.930 "get_zone_info": false, 00:15:01.930 "zone_management": false, 00:15:01.930 "zone_append": false, 00:15:01.930 "compare": false, 00:15:01.930 "compare_and_write": false, 00:15:01.930 "abort": true, 00:15:01.930 "seek_hole": false, 00:15:01.930 "seek_data": false, 00:15:01.930 "copy": true, 00:15:01.930 "nvme_iov_md": false 00:15:01.930 }, 00:15:01.930 "memory_domains": [ 00:15:01.930 { 00:15:01.930 "dma_device_id": "system", 00:15:01.930 "dma_device_type": 1 00:15:01.930 }, 00:15:01.930 { 00:15:01.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.930 "dma_device_type": 2 00:15:01.930 } 00:15:01.930 ], 00:15:01.930 "driver_specific": {} 00:15:01.930 } 00:15:01.930 ] 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:01.930 "name": "Existed_Raid", 00:15:01.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.930 "strip_size_kb": 64, 00:15:01.930 "state": "configuring", 00:15:01.930 "raid_level": "raid5f", 00:15:01.930 "superblock": false, 00:15:01.930 "num_base_bdevs": 3, 00:15:01.930 "num_base_bdevs_discovered": 2, 00:15:01.930 "num_base_bdevs_operational": 3, 00:15:01.930 "base_bdevs_list": [ 00:15:01.930 { 00:15:01.930 "name": "BaseBdev1", 00:15:01.930 "uuid": "ff750dba-7e74-4d58-a46b-b4358b0062fa", 00:15:01.930 "is_configured": true, 00:15:01.930 "data_offset": 0, 00:15:01.930 "data_size": 65536 00:15:01.930 }, 00:15:01.930 { 00:15:01.930 "name": "BaseBdev2", 00:15:01.930 "uuid": "69e02ce3-9c50-4c28-8e7c-c1155a87943a", 00:15:01.930 "is_configured": true, 00:15:01.930 "data_offset": 0, 00:15:01.930 "data_size": 65536 00:15:01.930 }, 00:15:01.930 { 00:15:01.930 "name": "BaseBdev3", 00:15:01.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.930 "is_configured": false, 00:15:01.930 "data_offset": 0, 00:15:01.930 "data_size": 0 00:15:01.930 } 00:15:01.930 ] 00:15:01.930 }' 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.930 09:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.190 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:02.190 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.190 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.449 [2024-10-15 09:14:20.125158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.449 [2024-10-15 09:14:20.125385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:02.449 [2024-10-15 09:14:20.125428] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:02.449 [2024-10-15 09:14:20.125846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.449 [2024-10-15 09:14:20.132947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:02.449 [2024-10-15 09:14:20.133031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:02.449 [2024-10-15 09:14:20.133500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.449 BaseBdev3 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.449 [ 00:15:02.449 { 00:15:02.449 "name": "BaseBdev3", 00:15:02.449 "aliases": [ 00:15:02.449 "661af87e-4512-400b-ac80-2a1ae5cf055d" 00:15:02.449 ], 00:15:02.449 "product_name": "Malloc disk", 00:15:02.449 "block_size": 512, 00:15:02.449 "num_blocks": 65536, 00:15:02.449 "uuid": "661af87e-4512-400b-ac80-2a1ae5cf055d", 00:15:02.449 "assigned_rate_limits": { 00:15:02.449 "rw_ios_per_sec": 0, 00:15:02.449 "rw_mbytes_per_sec": 0, 00:15:02.449 "r_mbytes_per_sec": 0, 00:15:02.449 "w_mbytes_per_sec": 0 00:15:02.449 }, 00:15:02.449 "claimed": true, 00:15:02.449 "claim_type": "exclusive_write", 00:15:02.449 "zoned": false, 00:15:02.449 "supported_io_types": { 00:15:02.449 "read": true, 00:15:02.449 "write": true, 00:15:02.449 "unmap": true, 00:15:02.449 "flush": true, 00:15:02.449 "reset": true, 00:15:02.449 "nvme_admin": false, 00:15:02.449 "nvme_io": false, 00:15:02.449 "nvme_io_md": false, 00:15:02.449 "write_zeroes": true, 00:15:02.449 "zcopy": true, 00:15:02.449 "get_zone_info": false, 00:15:02.449 "zone_management": false, 00:15:02.449 "zone_append": false, 00:15:02.449 "compare": false, 00:15:02.449 "compare_and_write": false, 00:15:02.449 "abort": true, 00:15:02.449 "seek_hole": false, 00:15:02.449 "seek_data": false, 00:15:02.449 "copy": true, 00:15:02.449 "nvme_iov_md": false 00:15:02.449 }, 00:15:02.449 "memory_domains": [ 00:15:02.449 { 00:15:02.449 "dma_device_id": "system", 00:15:02.449 "dma_device_type": 1 00:15:02.449 }, 00:15:02.449 { 00:15:02.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.449 "dma_device_type": 2 00:15:02.449 } 00:15:02.449 ], 00:15:02.449 "driver_specific": {} 00:15:02.449 } 00:15:02.449 ] 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.449 09:14:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.449 "name": "Existed_Raid", 00:15:02.449 "uuid": "b8ee3ab6-bb95-4f6a-bdef-046c8c754930", 00:15:02.449 "strip_size_kb": 64, 00:15:02.449 "state": "online", 00:15:02.449 "raid_level": "raid5f", 00:15:02.449 "superblock": false, 00:15:02.449 "num_base_bdevs": 3, 00:15:02.449 "num_base_bdevs_discovered": 3, 00:15:02.449 "num_base_bdevs_operational": 3, 00:15:02.449 "base_bdevs_list": [ 00:15:02.449 { 00:15:02.449 "name": "BaseBdev1", 00:15:02.449 "uuid": "ff750dba-7e74-4d58-a46b-b4358b0062fa", 00:15:02.449 "is_configured": true, 00:15:02.449 "data_offset": 0, 00:15:02.449 "data_size": 65536 00:15:02.449 }, 00:15:02.449 { 00:15:02.449 "name": "BaseBdev2", 00:15:02.449 "uuid": "69e02ce3-9c50-4c28-8e7c-c1155a87943a", 00:15:02.449 "is_configured": true, 00:15:02.449 "data_offset": 0, 00:15:02.449 "data_size": 65536 00:15:02.449 }, 00:15:02.449 { 00:15:02.449 "name": "BaseBdev3", 00:15:02.449 "uuid": "661af87e-4512-400b-ac80-2a1ae5cf055d", 00:15:02.449 "is_configured": true, 00:15:02.449 "data_offset": 0, 00:15:02.449 "data_size": 65536 00:15:02.449 } 00:15:02.449 ] 00:15:02.449 }' 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.449 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.017 09:14:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.017 [2024-10-15 09:14:20.645760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.017 "name": "Existed_Raid", 00:15:03.017 "aliases": [ 00:15:03.017 "b8ee3ab6-bb95-4f6a-bdef-046c8c754930" 00:15:03.017 ], 00:15:03.017 "product_name": "Raid Volume", 00:15:03.017 "block_size": 512, 00:15:03.017 "num_blocks": 131072, 00:15:03.017 "uuid": "b8ee3ab6-bb95-4f6a-bdef-046c8c754930", 00:15:03.017 "assigned_rate_limits": { 00:15:03.017 "rw_ios_per_sec": 0, 00:15:03.017 "rw_mbytes_per_sec": 0, 00:15:03.017 "r_mbytes_per_sec": 0, 00:15:03.017 "w_mbytes_per_sec": 0 00:15:03.017 }, 00:15:03.017 "claimed": false, 00:15:03.017 "zoned": false, 00:15:03.017 "supported_io_types": { 00:15:03.017 "read": true, 00:15:03.017 "write": true, 00:15:03.017 "unmap": false, 00:15:03.017 "flush": false, 00:15:03.017 "reset": true, 00:15:03.017 "nvme_admin": false, 00:15:03.017 "nvme_io": false, 00:15:03.017 "nvme_io_md": false, 00:15:03.017 "write_zeroes": true, 00:15:03.017 "zcopy": false, 00:15:03.017 "get_zone_info": false, 00:15:03.017 "zone_management": false, 00:15:03.017 "zone_append": false, 
00:15:03.017 "compare": false, 00:15:03.017 "compare_and_write": false, 00:15:03.017 "abort": false, 00:15:03.017 "seek_hole": false, 00:15:03.017 "seek_data": false, 00:15:03.017 "copy": false, 00:15:03.017 "nvme_iov_md": false 00:15:03.017 }, 00:15:03.017 "driver_specific": { 00:15:03.017 "raid": { 00:15:03.017 "uuid": "b8ee3ab6-bb95-4f6a-bdef-046c8c754930", 00:15:03.017 "strip_size_kb": 64, 00:15:03.017 "state": "online", 00:15:03.017 "raid_level": "raid5f", 00:15:03.017 "superblock": false, 00:15:03.017 "num_base_bdevs": 3, 00:15:03.017 "num_base_bdevs_discovered": 3, 00:15:03.017 "num_base_bdevs_operational": 3, 00:15:03.017 "base_bdevs_list": [ 00:15:03.017 { 00:15:03.017 "name": "BaseBdev1", 00:15:03.017 "uuid": "ff750dba-7e74-4d58-a46b-b4358b0062fa", 00:15:03.017 "is_configured": true, 00:15:03.017 "data_offset": 0, 00:15:03.017 "data_size": 65536 00:15:03.017 }, 00:15:03.017 { 00:15:03.017 "name": "BaseBdev2", 00:15:03.017 "uuid": "69e02ce3-9c50-4c28-8e7c-c1155a87943a", 00:15:03.017 "is_configured": true, 00:15:03.017 "data_offset": 0, 00:15:03.017 "data_size": 65536 00:15:03.017 }, 00:15:03.017 { 00:15:03.017 "name": "BaseBdev3", 00:15:03.017 "uuid": "661af87e-4512-400b-ac80-2a1ae5cf055d", 00:15:03.017 "is_configured": true, 00:15:03.017 "data_offset": 0, 00:15:03.017 "data_size": 65536 00:15:03.017 } 00:15:03.017 ] 00:15:03.017 } 00:15:03.017 } 00:15:03.017 }' 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:03.017 BaseBdev2 00:15:03.017 BaseBdev3' 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.017 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.018 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.276 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.276 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.276 09:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:03.276 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.276 09:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.276 [2024-10-15 09:14:20.937101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:03.276 
09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.276 "name": "Existed_Raid", 00:15:03.276 "uuid": "b8ee3ab6-bb95-4f6a-bdef-046c8c754930", 00:15:03.276 "strip_size_kb": 64, 00:15:03.276 "state": 
"online", 00:15:03.276 "raid_level": "raid5f", 00:15:03.276 "superblock": false, 00:15:03.276 "num_base_bdevs": 3, 00:15:03.276 "num_base_bdevs_discovered": 2, 00:15:03.276 "num_base_bdevs_operational": 2, 00:15:03.276 "base_bdevs_list": [ 00:15:03.276 { 00:15:03.276 "name": null, 00:15:03.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.276 "is_configured": false, 00:15:03.276 "data_offset": 0, 00:15:03.276 "data_size": 65536 00:15:03.276 }, 00:15:03.276 { 00:15:03.276 "name": "BaseBdev2", 00:15:03.276 "uuid": "69e02ce3-9c50-4c28-8e7c-c1155a87943a", 00:15:03.276 "is_configured": true, 00:15:03.276 "data_offset": 0, 00:15:03.276 "data_size": 65536 00:15:03.276 }, 00:15:03.276 { 00:15:03.276 "name": "BaseBdev3", 00:15:03.276 "uuid": "661af87e-4512-400b-ac80-2a1ae5cf055d", 00:15:03.276 "is_configured": true, 00:15:03.276 "data_offset": 0, 00:15:03.276 "data_size": 65536 00:15:03.276 } 00:15:03.276 ] 00:15:03.276 }' 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.276 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.843 [2024-10-15 09:14:21.586784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.843 [2024-10-15 09:14:21.586976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.843 [2024-10-15 09:14:21.703325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.843 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.102 [2024-10-15 09:14:21.759356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:04.102 [2024-10-15 09:14:21.759440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:04.102 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.103 BaseBdev2 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.103 09:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:04.103 [ 00:15:04.103 { 00:15:04.103 "name": "BaseBdev2", 00:15:04.103 "aliases": [ 00:15:04.103 "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe" 00:15:04.103 ], 00:15:04.103 "product_name": "Malloc disk", 00:15:04.103 "block_size": 512, 00:15:04.103 "num_blocks": 65536, 00:15:04.103 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:04.103 "assigned_rate_limits": { 00:15:04.103 "rw_ios_per_sec": 0, 00:15:04.103 "rw_mbytes_per_sec": 0, 00:15:04.103 "r_mbytes_per_sec": 0, 00:15:04.103 "w_mbytes_per_sec": 0 00:15:04.103 }, 00:15:04.103 "claimed": false, 00:15:04.103 "zoned": false, 00:15:04.103 "supported_io_types": { 00:15:04.103 "read": true, 00:15:04.103 "write": true, 00:15:04.103 "unmap": true, 00:15:04.103 "flush": true, 00:15:04.103 "reset": true, 00:15:04.103 "nvme_admin": false, 00:15:04.103 "nvme_io": false, 00:15:04.103 "nvme_io_md": false, 00:15:04.103 "write_zeroes": true, 00:15:04.103 "zcopy": true, 00:15:04.103 "get_zone_info": false, 00:15:04.103 "zone_management": false, 00:15:04.103 "zone_append": false, 00:15:04.103 "compare": false, 00:15:04.361 "compare_and_write": false, 00:15:04.361 "abort": true, 00:15:04.361 "seek_hole": false, 00:15:04.361 "seek_data": false, 00:15:04.361 "copy": true, 00:15:04.361 "nvme_iov_md": false 00:15:04.361 }, 00:15:04.361 "memory_domains": [ 00:15:04.362 { 00:15:04.362 "dma_device_id": "system", 00:15:04.362 "dma_device_type": 1 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.362 "dma_device_type": 2 00:15:04.362 } 00:15:04.362 ], 00:15:04.362 "driver_specific": {} 00:15:04.362 } 00:15:04.362 ] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.362 BaseBdev3 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.362 [ 00:15:04.362 { 00:15:04.362 "name": "BaseBdev3", 00:15:04.362 "aliases": [ 00:15:04.362 "c5097ede-3118-480c-a8a0-96365c99b937" 00:15:04.362 ], 00:15:04.362 "product_name": "Malloc disk", 00:15:04.362 "block_size": 512, 00:15:04.362 "num_blocks": 65536, 00:15:04.362 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:04.362 "assigned_rate_limits": { 00:15:04.362 "rw_ios_per_sec": 0, 00:15:04.362 "rw_mbytes_per_sec": 0, 00:15:04.362 "r_mbytes_per_sec": 0, 00:15:04.362 "w_mbytes_per_sec": 0 00:15:04.362 }, 00:15:04.362 "claimed": false, 00:15:04.362 "zoned": false, 00:15:04.362 "supported_io_types": { 00:15:04.362 "read": true, 00:15:04.362 "write": true, 00:15:04.362 "unmap": true, 00:15:04.362 "flush": true, 00:15:04.362 "reset": true, 00:15:04.362 "nvme_admin": false, 00:15:04.362 "nvme_io": false, 00:15:04.362 "nvme_io_md": false, 00:15:04.362 "write_zeroes": true, 00:15:04.362 "zcopy": true, 00:15:04.362 "get_zone_info": false, 00:15:04.362 "zone_management": false, 00:15:04.362 "zone_append": false, 00:15:04.362 "compare": false, 00:15:04.362 "compare_and_write": false, 00:15:04.362 "abort": true, 00:15:04.362 "seek_hole": false, 00:15:04.362 "seek_data": false, 00:15:04.362 "copy": true, 00:15:04.362 "nvme_iov_md": false 00:15:04.362 }, 00:15:04.362 "memory_domains": [ 00:15:04.362 { 00:15:04.362 "dma_device_id": "system", 00:15:04.362 "dma_device_type": 1 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.362 "dma_device_type": 2 00:15:04.362 } 00:15:04.362 ], 00:15:04.362 "driver_specific": {} 00:15:04.362 } 00:15:04.362 ] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:04.362 09:14:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.362 [2024-10-15 09:14:22.092960] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.362 [2024-10-15 09:14:22.093153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.362 [2024-10-15 09:14:22.093217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.362 [2024-10-15 09:14:22.095720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.362 09:14:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.362 "name": "Existed_Raid", 00:15:04.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.362 "strip_size_kb": 64, 00:15:04.362 "state": "configuring", 00:15:04.362 "raid_level": "raid5f", 00:15:04.362 "superblock": false, 00:15:04.362 "num_base_bdevs": 3, 00:15:04.362 "num_base_bdevs_discovered": 2, 00:15:04.362 "num_base_bdevs_operational": 3, 00:15:04.362 "base_bdevs_list": [ 00:15:04.362 { 00:15:04.362 "name": "BaseBdev1", 00:15:04.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.362 "is_configured": false, 00:15:04.362 "data_offset": 0, 00:15:04.362 "data_size": 0 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "name": "BaseBdev2", 00:15:04.362 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:04.362 "is_configured": true, 00:15:04.362 "data_offset": 0, 00:15:04.362 "data_size": 65536 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "name": "BaseBdev3", 00:15:04.362 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:04.362 "is_configured": true, 
00:15:04.362 "data_offset": 0, 00:15:04.362 "data_size": 65536 00:15:04.362 } 00:15:04.362 ] 00:15:04.362 }' 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.362 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.929 [2024-10-15 09:14:22.560388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.929 09:14:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.929 "name": "Existed_Raid", 00:15:04.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.929 "strip_size_kb": 64, 00:15:04.929 "state": "configuring", 00:15:04.929 "raid_level": "raid5f", 00:15:04.929 "superblock": false, 00:15:04.929 "num_base_bdevs": 3, 00:15:04.929 "num_base_bdevs_discovered": 1, 00:15:04.929 "num_base_bdevs_operational": 3, 00:15:04.929 "base_bdevs_list": [ 00:15:04.929 { 00:15:04.929 "name": "BaseBdev1", 00:15:04.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.929 "is_configured": false, 00:15:04.929 "data_offset": 0, 00:15:04.929 "data_size": 0 00:15:04.929 }, 00:15:04.929 { 00:15:04.929 "name": null, 00:15:04.929 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:04.929 "is_configured": false, 00:15:04.929 "data_offset": 0, 00:15:04.929 "data_size": 65536 00:15:04.929 }, 00:15:04.929 { 00:15:04.929 "name": "BaseBdev3", 00:15:04.929 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:04.929 "is_configured": true, 00:15:04.929 "data_offset": 0, 00:15:04.929 "data_size": 65536 00:15:04.929 } 00:15:04.929 ] 00:15:04.929 }' 00:15:04.929 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.929 09:14:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.187 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.187 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.187 09:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.187 09:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.187 [2024-10-15 09:14:23.080467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.187 BaseBdev1 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:05.187 09:14:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.187 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.446 [ 00:15:05.446 { 00:15:05.446 "name": "BaseBdev1", 00:15:05.446 "aliases": [ 00:15:05.446 "b97e0493-f987-467e-ab11-ca6c17b59b27" 00:15:05.446 ], 00:15:05.446 "product_name": "Malloc disk", 00:15:05.446 "block_size": 512, 00:15:05.446 "num_blocks": 65536, 00:15:05.446 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:05.446 "assigned_rate_limits": { 00:15:05.446 "rw_ios_per_sec": 0, 00:15:05.446 "rw_mbytes_per_sec": 0, 00:15:05.446 "r_mbytes_per_sec": 0, 00:15:05.446 "w_mbytes_per_sec": 0 00:15:05.446 }, 00:15:05.446 "claimed": true, 00:15:05.446 "claim_type": "exclusive_write", 00:15:05.446 "zoned": false, 00:15:05.446 "supported_io_types": { 00:15:05.446 "read": true, 00:15:05.446 "write": true, 00:15:05.446 "unmap": true, 00:15:05.446 "flush": true, 00:15:05.446 "reset": true, 00:15:05.446 "nvme_admin": false, 00:15:05.446 "nvme_io": false, 00:15:05.446 "nvme_io_md": false, 00:15:05.446 "write_zeroes": true, 00:15:05.446 "zcopy": true, 00:15:05.446 "get_zone_info": false, 00:15:05.446 "zone_management": false, 00:15:05.446 "zone_append": false, 00:15:05.446 
"compare": false, 00:15:05.446 "compare_and_write": false, 00:15:05.446 "abort": true, 00:15:05.446 "seek_hole": false, 00:15:05.446 "seek_data": false, 00:15:05.446 "copy": true, 00:15:05.446 "nvme_iov_md": false 00:15:05.446 }, 00:15:05.446 "memory_domains": [ 00:15:05.446 { 00:15:05.446 "dma_device_id": "system", 00:15:05.446 "dma_device_type": 1 00:15:05.446 }, 00:15:05.446 { 00:15:05.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.446 "dma_device_type": 2 00:15:05.446 } 00:15:05.446 ], 00:15:05.446 "driver_specific": {} 00:15:05.446 } 00:15:05.446 ] 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.446 09:14:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.446 "name": "Existed_Raid", 00:15:05.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.446 "strip_size_kb": 64, 00:15:05.446 "state": "configuring", 00:15:05.446 "raid_level": "raid5f", 00:15:05.446 "superblock": false, 00:15:05.446 "num_base_bdevs": 3, 00:15:05.446 "num_base_bdevs_discovered": 2, 00:15:05.446 "num_base_bdevs_operational": 3, 00:15:05.446 "base_bdevs_list": [ 00:15:05.446 { 00:15:05.446 "name": "BaseBdev1", 00:15:05.446 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:05.446 "is_configured": true, 00:15:05.446 "data_offset": 0, 00:15:05.446 "data_size": 65536 00:15:05.446 }, 00:15:05.446 { 00:15:05.446 "name": null, 00:15:05.446 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:05.446 "is_configured": false, 00:15:05.446 "data_offset": 0, 00:15:05.446 "data_size": 65536 00:15:05.446 }, 00:15:05.446 { 00:15:05.446 "name": "BaseBdev3", 00:15:05.446 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:05.446 "is_configured": true, 00:15:05.446 "data_offset": 0, 00:15:05.446 "data_size": 65536 00:15:05.446 } 00:15:05.446 ] 00:15:05.446 }' 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.446 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.704 09:14:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.704 [2024-10-15 09:14:23.595882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.704 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.963 09:14:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.963 "name": "Existed_Raid", 00:15:05.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.963 "strip_size_kb": 64, 00:15:05.963 "state": "configuring", 00:15:05.963 "raid_level": "raid5f", 00:15:05.963 "superblock": false, 00:15:05.963 "num_base_bdevs": 3, 00:15:05.963 "num_base_bdevs_discovered": 1, 00:15:05.963 "num_base_bdevs_operational": 3, 00:15:05.963 "base_bdevs_list": [ 00:15:05.963 { 00:15:05.963 "name": "BaseBdev1", 00:15:05.963 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:05.963 "is_configured": true, 00:15:05.963 "data_offset": 0, 00:15:05.963 "data_size": 65536 00:15:05.963 }, 00:15:05.963 { 00:15:05.963 "name": null, 00:15:05.963 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:05.963 "is_configured": false, 00:15:05.963 "data_offset": 0, 00:15:05.963 "data_size": 65536 00:15:05.963 }, 00:15:05.963 { 00:15:05.963 "name": null, 
00:15:05.963 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:05.963 "is_configured": false, 00:15:05.963 "data_offset": 0, 00:15:05.963 "data_size": 65536 00:15:05.963 } 00:15:05.963 ] 00:15:05.963 }' 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.963 09:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.222 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.222 [2024-10-15 09:14:24.115940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.481 09:14:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.481 "name": "Existed_Raid", 00:15:06.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.481 "strip_size_kb": 64, 00:15:06.481 "state": "configuring", 00:15:06.481 "raid_level": "raid5f", 00:15:06.481 "superblock": false, 00:15:06.481 "num_base_bdevs": 3, 00:15:06.481 "num_base_bdevs_discovered": 2, 00:15:06.481 "num_base_bdevs_operational": 3, 00:15:06.481 "base_bdevs_list": [ 00:15:06.481 { 
00:15:06.481 "name": "BaseBdev1", 00:15:06.481 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:06.481 "is_configured": true, 00:15:06.481 "data_offset": 0, 00:15:06.481 "data_size": 65536 00:15:06.481 }, 00:15:06.481 { 00:15:06.481 "name": null, 00:15:06.481 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:06.481 "is_configured": false, 00:15:06.481 "data_offset": 0, 00:15:06.481 "data_size": 65536 00:15:06.481 }, 00:15:06.481 { 00:15:06.481 "name": "BaseBdev3", 00:15:06.481 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:06.481 "is_configured": true, 00:15:06.481 "data_offset": 0, 00:15:06.481 "data_size": 65536 00:15:06.481 } 00:15:06.481 ] 00:15:06.481 }' 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.481 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.739 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.739 [2024-10-15 09:14:24.579247] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.998 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.998 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.998 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.998 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.998 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.998 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.998 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.999 "name": "Existed_Raid", 00:15:06.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.999 "strip_size_kb": 64, 00:15:06.999 "state": "configuring", 00:15:06.999 "raid_level": "raid5f", 00:15:06.999 "superblock": false, 00:15:06.999 "num_base_bdevs": 3, 00:15:06.999 "num_base_bdevs_discovered": 1, 00:15:06.999 "num_base_bdevs_operational": 3, 00:15:06.999 "base_bdevs_list": [ 00:15:06.999 { 00:15:06.999 "name": null, 00:15:06.999 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:06.999 "is_configured": false, 00:15:06.999 "data_offset": 0, 00:15:06.999 "data_size": 65536 00:15:06.999 }, 00:15:06.999 { 00:15:06.999 "name": null, 00:15:06.999 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:06.999 "is_configured": false, 00:15:06.999 "data_offset": 0, 00:15:06.999 "data_size": 65536 00:15:06.999 }, 00:15:06.999 { 00:15:06.999 "name": "BaseBdev3", 00:15:06.999 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:06.999 "is_configured": true, 00:15:06.999 "data_offset": 0, 00:15:06.999 "data_size": 65536 00:15:06.999 } 00:15:06.999 ] 00:15:06.999 }' 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.999 09:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.567 [2024-10-15 09:14:25.209910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.567 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.568 09:14:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.568 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.568 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.568 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.568 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.568 "name": "Existed_Raid", 00:15:07.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.568 "strip_size_kb": 64, 00:15:07.568 "state": "configuring", 00:15:07.568 "raid_level": "raid5f", 00:15:07.568 "superblock": false, 00:15:07.568 "num_base_bdevs": 3, 00:15:07.568 "num_base_bdevs_discovered": 2, 00:15:07.568 "num_base_bdevs_operational": 3, 00:15:07.568 "base_bdevs_list": [ 00:15:07.568 { 00:15:07.568 "name": null, 00:15:07.568 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:07.568 "is_configured": false, 00:15:07.568 "data_offset": 0, 00:15:07.568 "data_size": 65536 00:15:07.568 }, 00:15:07.568 { 00:15:07.568 "name": "BaseBdev2", 00:15:07.568 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:07.568 "is_configured": true, 00:15:07.568 "data_offset": 0, 00:15:07.568 "data_size": 65536 00:15:07.568 }, 00:15:07.568 { 00:15:07.568 "name": "BaseBdev3", 00:15:07.568 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:07.568 "is_configured": true, 00:15:07.568 "data_offset": 0, 00:15:07.568 "data_size": 65536 00:15:07.568 } 00:15:07.568 ] 00:15:07.568 }' 00:15:07.568 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.568 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.827 09:14:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:07.827 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b97e0493-f987-467e-ab11-ca6c17b59b27 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.085 [2024-10-15 09:14:25.797886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:08.085 [2024-10-15 09:14:25.798056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:08.085 [2024-10-15 09:14:25.798091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:08.085 [2024-10-15 09:14:25.798429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:08.085 [2024-10-15 09:14:25.805214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:08.085 [2024-10-15 09:14:25.805328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:08.085 [2024-10-15 09:14:25.805816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.085 NewBaseBdev 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:08.085 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.086 09:14:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.086 [ 00:15:08.086 { 00:15:08.086 "name": "NewBaseBdev", 00:15:08.086 "aliases": [ 00:15:08.086 "b97e0493-f987-467e-ab11-ca6c17b59b27" 00:15:08.086 ], 00:15:08.086 "product_name": "Malloc disk", 00:15:08.086 "block_size": 512, 00:15:08.086 "num_blocks": 65536, 00:15:08.086 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:08.086 "assigned_rate_limits": { 00:15:08.086 "rw_ios_per_sec": 0, 00:15:08.086 "rw_mbytes_per_sec": 0, 00:15:08.086 "r_mbytes_per_sec": 0, 00:15:08.086 "w_mbytes_per_sec": 0 00:15:08.086 }, 00:15:08.086 "claimed": true, 00:15:08.086 "claim_type": "exclusive_write", 00:15:08.086 "zoned": false, 00:15:08.086 "supported_io_types": { 00:15:08.086 "read": true, 00:15:08.086 "write": true, 00:15:08.086 "unmap": true, 00:15:08.086 "flush": true, 00:15:08.086 "reset": true, 00:15:08.086 "nvme_admin": false, 00:15:08.086 "nvme_io": false, 00:15:08.086 "nvme_io_md": false, 00:15:08.086 "write_zeroes": true, 00:15:08.086 "zcopy": true, 00:15:08.086 "get_zone_info": false, 00:15:08.086 "zone_management": false, 00:15:08.086 "zone_append": false, 00:15:08.086 "compare": false, 00:15:08.086 "compare_and_write": false, 00:15:08.086 "abort": true, 00:15:08.086 "seek_hole": false, 00:15:08.086 "seek_data": false, 00:15:08.086 "copy": true, 00:15:08.086 "nvme_iov_md": false 00:15:08.086 }, 00:15:08.086 "memory_domains": [ 00:15:08.086 { 00:15:08.086 "dma_device_id": "system", 00:15:08.086 "dma_device_type": 1 00:15:08.086 }, 00:15:08.086 { 00:15:08.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.086 "dma_device_type": 2 00:15:08.086 } 00:15:08.086 ], 00:15:08.086 "driver_specific": {} 00:15:08.086 } 00:15:08.086 ] 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:08.086 09:14:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.086 "name": "Existed_Raid", 00:15:08.086 "uuid": "d03350c4-9938-4980-b7d6-23286afaf5fd", 00:15:08.086 "strip_size_kb": 64, 00:15:08.086 "state": "online", 
00:15:08.086 "raid_level": "raid5f", 00:15:08.086 "superblock": false, 00:15:08.086 "num_base_bdevs": 3, 00:15:08.086 "num_base_bdevs_discovered": 3, 00:15:08.086 "num_base_bdevs_operational": 3, 00:15:08.086 "base_bdevs_list": [ 00:15:08.086 { 00:15:08.086 "name": "NewBaseBdev", 00:15:08.086 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:08.086 "is_configured": true, 00:15:08.086 "data_offset": 0, 00:15:08.086 "data_size": 65536 00:15:08.086 }, 00:15:08.086 { 00:15:08.086 "name": "BaseBdev2", 00:15:08.086 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:08.086 "is_configured": true, 00:15:08.086 "data_offset": 0, 00:15:08.086 "data_size": 65536 00:15:08.086 }, 00:15:08.086 { 00:15:08.086 "name": "BaseBdev3", 00:15:08.086 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:08.086 "is_configured": true, 00:15:08.086 "data_offset": 0, 00:15:08.086 "data_size": 65536 00:15:08.086 } 00:15:08.086 ] 00:15:08.086 }' 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.086 09:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:08.653 09:14:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.653 [2024-10-15 09:14:26.337843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.653 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.653 "name": "Existed_Raid", 00:15:08.653 "aliases": [ 00:15:08.653 "d03350c4-9938-4980-b7d6-23286afaf5fd" 00:15:08.653 ], 00:15:08.653 "product_name": "Raid Volume", 00:15:08.653 "block_size": 512, 00:15:08.653 "num_blocks": 131072, 00:15:08.653 "uuid": "d03350c4-9938-4980-b7d6-23286afaf5fd", 00:15:08.654 "assigned_rate_limits": { 00:15:08.654 "rw_ios_per_sec": 0, 00:15:08.654 "rw_mbytes_per_sec": 0, 00:15:08.654 "r_mbytes_per_sec": 0, 00:15:08.654 "w_mbytes_per_sec": 0 00:15:08.654 }, 00:15:08.654 "claimed": false, 00:15:08.654 "zoned": false, 00:15:08.654 "supported_io_types": { 00:15:08.654 "read": true, 00:15:08.654 "write": true, 00:15:08.654 "unmap": false, 00:15:08.654 "flush": false, 00:15:08.654 "reset": true, 00:15:08.654 "nvme_admin": false, 00:15:08.654 "nvme_io": false, 00:15:08.654 "nvme_io_md": false, 00:15:08.654 "write_zeroes": true, 00:15:08.654 "zcopy": false, 00:15:08.654 "get_zone_info": false, 00:15:08.654 "zone_management": false, 00:15:08.654 "zone_append": false, 00:15:08.654 "compare": false, 00:15:08.654 "compare_and_write": false, 00:15:08.654 "abort": false, 00:15:08.654 "seek_hole": false, 00:15:08.654 "seek_data": false, 00:15:08.654 "copy": false, 00:15:08.654 "nvme_iov_md": false 00:15:08.654 }, 00:15:08.654 "driver_specific": { 00:15:08.654 "raid": { 00:15:08.654 "uuid": 
"d03350c4-9938-4980-b7d6-23286afaf5fd", 00:15:08.654 "strip_size_kb": 64, 00:15:08.654 "state": "online", 00:15:08.654 "raid_level": "raid5f", 00:15:08.654 "superblock": false, 00:15:08.654 "num_base_bdevs": 3, 00:15:08.654 "num_base_bdevs_discovered": 3, 00:15:08.654 "num_base_bdevs_operational": 3, 00:15:08.654 "base_bdevs_list": [ 00:15:08.654 { 00:15:08.654 "name": "NewBaseBdev", 00:15:08.654 "uuid": "b97e0493-f987-467e-ab11-ca6c17b59b27", 00:15:08.654 "is_configured": true, 00:15:08.654 "data_offset": 0, 00:15:08.654 "data_size": 65536 00:15:08.654 }, 00:15:08.654 { 00:15:08.654 "name": "BaseBdev2", 00:15:08.654 "uuid": "5bec0551-1ccc-49ec-ae31-57a7c80ccbbe", 00:15:08.654 "is_configured": true, 00:15:08.654 "data_offset": 0, 00:15:08.654 "data_size": 65536 00:15:08.654 }, 00:15:08.654 { 00:15:08.654 "name": "BaseBdev3", 00:15:08.654 "uuid": "c5097ede-3118-480c-a8a0-96365c99b937", 00:15:08.654 "is_configured": true, 00:15:08.654 "data_offset": 0, 00:15:08.654 "data_size": 65536 00:15:08.654 } 00:15:08.654 ] 00:15:08.654 } 00:15:08.654 } 00:15:08.654 }' 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:08.654 BaseBdev2 00:15:08.654 BaseBdev3' 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.654 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.913 [2024-10-15 09:14:26.637607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.913 [2024-10-15 09:14:26.637665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.913 [2024-10-15 09:14:26.637790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.913 [2024-10-15 09:14:26.638146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.913 [2024-10-15 09:14:26.638175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80127 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80127 ']' 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 80127 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80127 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.913 killing process with pid 80127 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80127' 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 80127 00:15:08.913 [2024-10-15 09:14:26.682126] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.913 09:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 80127 00:15:09.172 [2024-10-15 09:14:27.052602] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.547 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:10.547 00:15:10.547 real 0m11.435s 00:15:10.547 user 0m17.741s 00:15:10.547 sys 0m2.101s 00:15:10.547 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.547 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.547 ************************************ 00:15:10.547 END TEST raid5f_state_function_test 00:15:10.547 ************************************ 00:15:10.805 09:14:28 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:10.805 09:14:28 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:10.805 09:14:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:10.805 09:14:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.805 ************************************ 00:15:10.805 START TEST raid5f_state_function_test_sb 00:15:10.806 ************************************ 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:10.806 09:14:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80754 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80754' 00:15:10.806 Process raid pid: 80754 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80754 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80754 ']' 00:15:10.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.806 09:14:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.806 [2024-10-15 09:14:28.592984] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:15:10.806 [2024-10-15 09:14:28.593126] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.065 [2024-10-15 09:14:28.766537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.065 [2024-10-15 09:14:28.908112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.324 [2024-10-15 09:14:29.155697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.324 [2024-10-15 09:14:29.155864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.891 [2024-10-15 09:14:29.488957] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.891 [2024-10-15 09:14:29.489043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.891 [2024-10-15 09:14:29.489055] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.891 [2024-10-15 09:14:29.489067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.891 [2024-10-15 09:14:29.489075] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:11.891 [2024-10-15 09:14:29.489086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.891 09:14:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.891 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.891 "name": "Existed_Raid", 00:15:11.892 "uuid": "1a05c182-c80a-4b0c-98ba-3955d95c321c", 00:15:11.892 "strip_size_kb": 64, 00:15:11.892 "state": "configuring", 00:15:11.892 "raid_level": "raid5f", 00:15:11.892 "superblock": true, 00:15:11.892 "num_base_bdevs": 3, 00:15:11.892 "num_base_bdevs_discovered": 0, 00:15:11.892 "num_base_bdevs_operational": 3, 00:15:11.892 "base_bdevs_list": [ 00:15:11.892 { 00:15:11.892 "name": "BaseBdev1", 00:15:11.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.892 "is_configured": false, 00:15:11.892 "data_offset": 0, 00:15:11.892 "data_size": 0 00:15:11.892 }, 00:15:11.892 { 00:15:11.892 "name": "BaseBdev2", 00:15:11.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.892 "is_configured": false, 00:15:11.892 "data_offset": 0, 00:15:11.892 "data_size": 0 00:15:11.892 }, 00:15:11.892 { 00:15:11.892 "name": "BaseBdev3", 00:15:11.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.892 "is_configured": false, 00:15:11.892 "data_offset": 0, 00:15:11.892 "data_size": 0 00:15:11.892 } 00:15:11.892 ] 00:15:11.892 }' 00:15:11.892 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.892 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.151 [2024-10-15 09:14:29.944903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.151 
[2024-10-15 09:14:29.944966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.151 [2024-10-15 09:14:29.956979] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.151 [2024-10-15 09:14:29.957057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.151 [2024-10-15 09:14:29.957068] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.151 [2024-10-15 09:14:29.957080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.151 [2024-10-15 09:14:29.957088] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.151 [2024-10-15 09:14:29.957099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.151 09:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.151 [2024-10-15 09:14:30.013272] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.151 BaseBdev1 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.151 [ 00:15:12.151 { 00:15:12.151 "name": "BaseBdev1", 00:15:12.151 "aliases": [ 00:15:12.151 "ad3a451f-5376-4e5b-9b5d-addf08c4f73b" 00:15:12.151 ], 00:15:12.151 "product_name": "Malloc disk", 00:15:12.151 "block_size": 512, 00:15:12.151 
"num_blocks": 65536, 00:15:12.151 "uuid": "ad3a451f-5376-4e5b-9b5d-addf08c4f73b", 00:15:12.151 "assigned_rate_limits": { 00:15:12.151 "rw_ios_per_sec": 0, 00:15:12.151 "rw_mbytes_per_sec": 0, 00:15:12.151 "r_mbytes_per_sec": 0, 00:15:12.151 "w_mbytes_per_sec": 0 00:15:12.151 }, 00:15:12.151 "claimed": true, 00:15:12.151 "claim_type": "exclusive_write", 00:15:12.151 "zoned": false, 00:15:12.151 "supported_io_types": { 00:15:12.151 "read": true, 00:15:12.151 "write": true, 00:15:12.151 "unmap": true, 00:15:12.151 "flush": true, 00:15:12.151 "reset": true, 00:15:12.151 "nvme_admin": false, 00:15:12.151 "nvme_io": false, 00:15:12.151 "nvme_io_md": false, 00:15:12.151 "write_zeroes": true, 00:15:12.151 "zcopy": true, 00:15:12.151 "get_zone_info": false, 00:15:12.151 "zone_management": false, 00:15:12.151 "zone_append": false, 00:15:12.151 "compare": false, 00:15:12.151 "compare_and_write": false, 00:15:12.151 "abort": true, 00:15:12.151 "seek_hole": false, 00:15:12.151 "seek_data": false, 00:15:12.151 "copy": true, 00:15:12.151 "nvme_iov_md": false 00:15:12.151 }, 00:15:12.151 "memory_domains": [ 00:15:12.151 { 00:15:12.151 "dma_device_id": "system", 00:15:12.151 "dma_device_type": 1 00:15:12.151 }, 00:15:12.151 { 00:15:12.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.151 "dma_device_type": 2 00:15:12.151 } 00:15:12.151 ], 00:15:12.151 "driver_specific": {} 00:15:12.151 } 00:15:12.151 ] 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.151 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.410 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.410 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.410 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.410 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.410 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.410 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.410 "name": "Existed_Raid", 00:15:12.410 "uuid": "43754e8b-5971-4eb3-a7ef-680e33d26fce", 00:15:12.410 "strip_size_kb": 64, 00:15:12.410 "state": "configuring", 00:15:12.410 "raid_level": "raid5f", 00:15:12.410 "superblock": true, 00:15:12.410 "num_base_bdevs": 3, 00:15:12.410 "num_base_bdevs_discovered": 1, 00:15:12.410 "num_base_bdevs_operational": 3, 00:15:12.410 "base_bdevs_list": [ 00:15:12.410 { 00:15:12.410 
"name": "BaseBdev1", 00:15:12.410 "uuid": "ad3a451f-5376-4e5b-9b5d-addf08c4f73b", 00:15:12.410 "is_configured": true, 00:15:12.410 "data_offset": 2048, 00:15:12.410 "data_size": 63488 00:15:12.410 }, 00:15:12.410 { 00:15:12.410 "name": "BaseBdev2", 00:15:12.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.410 "is_configured": false, 00:15:12.410 "data_offset": 0, 00:15:12.410 "data_size": 0 00:15:12.410 }, 00:15:12.410 { 00:15:12.410 "name": "BaseBdev3", 00:15:12.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.410 "is_configured": false, 00:15:12.410 "data_offset": 0, 00:15:12.410 "data_size": 0 00:15:12.410 } 00:15:12.410 ] 00:15:12.411 }' 00:15:12.411 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.411 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.669 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:12.669 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.669 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.669 [2024-10-15 09:14:30.552882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.669 [2024-10-15 09:14:30.553070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:12.669 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.669 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:12.669 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.669 09:14:30 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:12.669 [2024-10-15 09:14:30.565002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.929 [2024-10-15 09:14:30.567379] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.929 [2024-10-15 09:14:30.567514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.929 [2024-10-15 09:14:30.567555] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.929 [2024-10-15 09:14:30.567600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.929 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.929 "name": "Existed_Raid", 00:15:12.929 "uuid": "b62f64f3-515c-4be8-947c-c8ad64bca4d6", 00:15:12.929 "strip_size_kb": 64, 00:15:12.929 "state": "configuring", 00:15:12.929 "raid_level": "raid5f", 00:15:12.929 "superblock": true, 00:15:12.929 "num_base_bdevs": 3, 00:15:12.929 "num_base_bdevs_discovered": 1, 00:15:12.929 "num_base_bdevs_operational": 3, 00:15:12.929 "base_bdevs_list": [ 00:15:12.929 { 00:15:12.929 "name": "BaseBdev1", 00:15:12.929 "uuid": "ad3a451f-5376-4e5b-9b5d-addf08c4f73b", 00:15:12.929 "is_configured": true, 00:15:12.929 "data_offset": 2048, 00:15:12.929 "data_size": 63488 00:15:12.929 }, 00:15:12.929 { 00:15:12.929 "name": "BaseBdev2", 00:15:12.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.929 "is_configured": false, 00:15:12.929 "data_offset": 0, 00:15:12.929 "data_size": 0 00:15:12.929 }, 00:15:12.929 { 00:15:12.929 "name": "BaseBdev3", 00:15:12.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.929 "is_configured": false, 00:15:12.929 "data_offset": 0, 00:15:12.929 "data_size": 
0 00:15:12.929 } 00:15:12.930 ] 00:15:12.930 }' 00:15:12.930 09:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.930 09:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.189 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.189 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.189 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.189 [2024-10-15 09:14:31.051229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.189 BaseBdev2 00:15:13.189 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.189 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.190 [ 00:15:13.190 { 00:15:13.190 "name": "BaseBdev2", 00:15:13.190 "aliases": [ 00:15:13.190 "21d5c274-3fbf-4e35-aec8-5bf2924748c8" 00:15:13.190 ], 00:15:13.190 "product_name": "Malloc disk", 00:15:13.190 "block_size": 512, 00:15:13.190 "num_blocks": 65536, 00:15:13.190 "uuid": "21d5c274-3fbf-4e35-aec8-5bf2924748c8", 00:15:13.190 "assigned_rate_limits": { 00:15:13.190 "rw_ios_per_sec": 0, 00:15:13.190 "rw_mbytes_per_sec": 0, 00:15:13.190 "r_mbytes_per_sec": 0, 00:15:13.190 "w_mbytes_per_sec": 0 00:15:13.190 }, 00:15:13.190 "claimed": true, 00:15:13.190 "claim_type": "exclusive_write", 00:15:13.190 "zoned": false, 00:15:13.190 "supported_io_types": { 00:15:13.190 "read": true, 00:15:13.190 "write": true, 00:15:13.190 "unmap": true, 00:15:13.190 "flush": true, 00:15:13.190 "reset": true, 00:15:13.190 "nvme_admin": false, 00:15:13.190 "nvme_io": false, 00:15:13.190 "nvme_io_md": false, 00:15:13.190 "write_zeroes": true, 00:15:13.190 "zcopy": true, 00:15:13.190 "get_zone_info": false, 00:15:13.190 "zone_management": false, 00:15:13.190 "zone_append": false, 00:15:13.190 "compare": false, 00:15:13.190 "compare_and_write": false, 00:15:13.190 "abort": true, 00:15:13.190 "seek_hole": false, 00:15:13.190 "seek_data": false, 00:15:13.190 "copy": true, 00:15:13.190 "nvme_iov_md": false 00:15:13.190 }, 00:15:13.190 "memory_domains": [ 00:15:13.190 { 00:15:13.190 "dma_device_id": "system", 00:15:13.190 "dma_device_type": 1 00:15:13.190 }, 00:15:13.190 { 00:15:13.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.190 "dma_device_type": 2 00:15:13.190 } 
00:15:13.190 ], 00:15:13.190 "driver_specific": {} 00:15:13.190 } 00:15:13.190 ] 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.190 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.450 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.450 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:13.450 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.450 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.450 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.450 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.450 "name": "Existed_Raid", 00:15:13.450 "uuid": "b62f64f3-515c-4be8-947c-c8ad64bca4d6", 00:15:13.450 "strip_size_kb": 64, 00:15:13.450 "state": "configuring", 00:15:13.450 "raid_level": "raid5f", 00:15:13.450 "superblock": true, 00:15:13.450 "num_base_bdevs": 3, 00:15:13.450 "num_base_bdevs_discovered": 2, 00:15:13.450 "num_base_bdevs_operational": 3, 00:15:13.450 "base_bdevs_list": [ 00:15:13.450 { 00:15:13.450 "name": "BaseBdev1", 00:15:13.450 "uuid": "ad3a451f-5376-4e5b-9b5d-addf08c4f73b", 00:15:13.450 "is_configured": true, 00:15:13.450 "data_offset": 2048, 00:15:13.450 "data_size": 63488 00:15:13.450 }, 00:15:13.450 { 00:15:13.450 "name": "BaseBdev2", 00:15:13.450 "uuid": "21d5c274-3fbf-4e35-aec8-5bf2924748c8", 00:15:13.450 "is_configured": true, 00:15:13.450 "data_offset": 2048, 00:15:13.450 "data_size": 63488 00:15:13.450 }, 00:15:13.450 { 00:15:13.450 "name": "BaseBdev3", 00:15:13.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.450 "is_configured": false, 00:15:13.450 "data_offset": 0, 00:15:13.450 "data_size": 0 00:15:13.450 } 00:15:13.450 ] 00:15:13.450 }' 00:15:13.450 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.450 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.710 [2024-10-15 09:14:31.566803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.710 [2024-10-15 09:14:31.567234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:13.710 [2024-10-15 09:14:31.567307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:13.710 [2024-10-15 09:14:31.567660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:13.710 BaseBdev3 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.710 [2024-10-15 09:14:31.574656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:13.710 [2024-10-15 09:14:31.574811] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:13.710 [2024-10-15 09:14:31.575125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.710 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.710 [ 00:15:13.710 { 00:15:13.710 "name": "BaseBdev3", 00:15:13.710 "aliases": [ 00:15:13.710 "733d684e-588e-40a7-bffd-c4dfd6ae2937" 00:15:13.710 ], 00:15:13.710 "product_name": "Malloc disk", 00:15:13.710 "block_size": 512, 00:15:13.710 "num_blocks": 65536, 00:15:13.710 "uuid": "733d684e-588e-40a7-bffd-c4dfd6ae2937", 00:15:13.710 "assigned_rate_limits": { 00:15:13.710 "rw_ios_per_sec": 0, 00:15:13.710 "rw_mbytes_per_sec": 0, 00:15:13.710 "r_mbytes_per_sec": 0, 00:15:13.710 "w_mbytes_per_sec": 0 00:15:13.710 }, 00:15:13.710 "claimed": true, 00:15:13.710 "claim_type": "exclusive_write", 00:15:13.710 "zoned": false, 00:15:13.710 "supported_io_types": { 00:15:13.710 "read": true, 00:15:13.710 "write": true, 00:15:13.710 "unmap": true, 00:15:13.710 "flush": true, 00:15:13.710 "reset": true, 00:15:13.710 "nvme_admin": false, 00:15:13.710 "nvme_io": false, 00:15:13.710 "nvme_io_md": false, 00:15:13.970 "write_zeroes": true, 00:15:13.970 "zcopy": true, 00:15:13.970 "get_zone_info": false, 00:15:13.970 "zone_management": false, 00:15:13.970 "zone_append": false, 00:15:13.970 "compare": false, 00:15:13.970 "compare_and_write": false, 00:15:13.970 "abort": true, 00:15:13.970 "seek_hole": false, 00:15:13.970 "seek_data": false, 00:15:13.970 "copy": true, 00:15:13.970 
"nvme_iov_md": false 00:15:13.970 }, 00:15:13.970 "memory_domains": [ 00:15:13.970 { 00:15:13.970 "dma_device_id": "system", 00:15:13.970 "dma_device_type": 1 00:15:13.970 }, 00:15:13.970 { 00:15:13.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.970 "dma_device_type": 2 00:15:13.970 } 00:15:13.970 ], 00:15:13.970 "driver_specific": {} 00:15:13.970 } 00:15:13.970 ] 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.970 "name": "Existed_Raid", 00:15:13.970 "uuid": "b62f64f3-515c-4be8-947c-c8ad64bca4d6", 00:15:13.970 "strip_size_kb": 64, 00:15:13.970 "state": "online", 00:15:13.970 "raid_level": "raid5f", 00:15:13.970 "superblock": true, 00:15:13.970 "num_base_bdevs": 3, 00:15:13.970 "num_base_bdevs_discovered": 3, 00:15:13.970 "num_base_bdevs_operational": 3, 00:15:13.970 "base_bdevs_list": [ 00:15:13.970 { 00:15:13.970 "name": "BaseBdev1", 00:15:13.970 "uuid": "ad3a451f-5376-4e5b-9b5d-addf08c4f73b", 00:15:13.970 "is_configured": true, 00:15:13.970 "data_offset": 2048, 00:15:13.970 "data_size": 63488 00:15:13.970 }, 00:15:13.970 { 00:15:13.970 "name": "BaseBdev2", 00:15:13.970 "uuid": "21d5c274-3fbf-4e35-aec8-5bf2924748c8", 00:15:13.970 "is_configured": true, 00:15:13.970 "data_offset": 2048, 00:15:13.970 "data_size": 63488 00:15:13.970 }, 00:15:13.970 { 00:15:13.970 "name": "BaseBdev3", 00:15:13.970 "uuid": "733d684e-588e-40a7-bffd-c4dfd6ae2937", 00:15:13.970 "is_configured": true, 00:15:13.970 "data_offset": 2048, 00:15:13.970 "data_size": 63488 00:15:13.970 } 00:15:13.970 ] 00:15:13.970 }' 00:15:13.970 09:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.970 09:14:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:14.294 [2024-10-15 09:14:32.091011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:14.294 "name": "Existed_Raid", 00:15:14.294 "aliases": [ 00:15:14.294 "b62f64f3-515c-4be8-947c-c8ad64bca4d6" 00:15:14.294 ], 00:15:14.294 "product_name": "Raid Volume", 00:15:14.294 "block_size": 512, 00:15:14.294 "num_blocks": 126976, 00:15:14.294 "uuid": "b62f64f3-515c-4be8-947c-c8ad64bca4d6", 00:15:14.294 "assigned_rate_limits": { 00:15:14.294 "rw_ios_per_sec": 0, 00:15:14.294 
"rw_mbytes_per_sec": 0, 00:15:14.294 "r_mbytes_per_sec": 0, 00:15:14.294 "w_mbytes_per_sec": 0 00:15:14.294 }, 00:15:14.294 "claimed": false, 00:15:14.294 "zoned": false, 00:15:14.294 "supported_io_types": { 00:15:14.294 "read": true, 00:15:14.294 "write": true, 00:15:14.294 "unmap": false, 00:15:14.294 "flush": false, 00:15:14.294 "reset": true, 00:15:14.294 "nvme_admin": false, 00:15:14.294 "nvme_io": false, 00:15:14.294 "nvme_io_md": false, 00:15:14.294 "write_zeroes": true, 00:15:14.294 "zcopy": false, 00:15:14.294 "get_zone_info": false, 00:15:14.294 "zone_management": false, 00:15:14.294 "zone_append": false, 00:15:14.294 "compare": false, 00:15:14.294 "compare_and_write": false, 00:15:14.294 "abort": false, 00:15:14.294 "seek_hole": false, 00:15:14.294 "seek_data": false, 00:15:14.294 "copy": false, 00:15:14.294 "nvme_iov_md": false 00:15:14.294 }, 00:15:14.294 "driver_specific": { 00:15:14.294 "raid": { 00:15:14.294 "uuid": "b62f64f3-515c-4be8-947c-c8ad64bca4d6", 00:15:14.294 "strip_size_kb": 64, 00:15:14.294 "state": "online", 00:15:14.294 "raid_level": "raid5f", 00:15:14.294 "superblock": true, 00:15:14.294 "num_base_bdevs": 3, 00:15:14.294 "num_base_bdevs_discovered": 3, 00:15:14.294 "num_base_bdevs_operational": 3, 00:15:14.294 "base_bdevs_list": [ 00:15:14.294 { 00:15:14.294 "name": "BaseBdev1", 00:15:14.294 "uuid": "ad3a451f-5376-4e5b-9b5d-addf08c4f73b", 00:15:14.294 "is_configured": true, 00:15:14.294 "data_offset": 2048, 00:15:14.294 "data_size": 63488 00:15:14.294 }, 00:15:14.294 { 00:15:14.294 "name": "BaseBdev2", 00:15:14.294 "uuid": "21d5c274-3fbf-4e35-aec8-5bf2924748c8", 00:15:14.294 "is_configured": true, 00:15:14.294 "data_offset": 2048, 00:15:14.294 "data_size": 63488 00:15:14.294 }, 00:15:14.294 { 00:15:14.294 "name": "BaseBdev3", 00:15:14.294 "uuid": "733d684e-588e-40a7-bffd-c4dfd6ae2937", 00:15:14.294 "is_configured": true, 00:15:14.294 "data_offset": 2048, 00:15:14.294 "data_size": 63488 00:15:14.294 } 00:15:14.294 ] 00:15:14.294 } 
00:15:14.294 } 00:15:14.294 }' 00:15:14.294 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:14.569 BaseBdev2 00:15:14.569 BaseBdev3' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.569 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.569 [2024-10-15 
09:14:32.390264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.828 09:14:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.828 "name": "Existed_Raid", 00:15:14.828 "uuid": "b62f64f3-515c-4be8-947c-c8ad64bca4d6", 00:15:14.828 "strip_size_kb": 64, 00:15:14.828 "state": "online", 00:15:14.828 "raid_level": "raid5f", 00:15:14.828 "superblock": true, 00:15:14.828 "num_base_bdevs": 3, 00:15:14.828 "num_base_bdevs_discovered": 2, 00:15:14.828 "num_base_bdevs_operational": 2, 00:15:14.828 "base_bdevs_list": [ 00:15:14.828 { 00:15:14.828 "name": null, 00:15:14.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.828 "is_configured": false, 00:15:14.828 "data_offset": 0, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev2", 00:15:14.828 "uuid": "21d5c274-3fbf-4e35-aec8-5bf2924748c8", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev3", 00:15:14.828 "uuid": "733d684e-588e-40a7-bffd-c4dfd6ae2937", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 } 00:15:14.828 ] 00:15:14.828 }' 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.828 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:15.086 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:15.086 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:15.086 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.086 09:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:15.086 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.086 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.086 09:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.345 [2024-10-15 09:14:33.016008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:15.345 [2024-10-15 09:14:33.016380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.345 [2024-10-15 09:14:33.132088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:15.345 09:14:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.345 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.345 [2024-10-15 09:14:33.192087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:15.345 [2024-10-15 09:14:33.192273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:15.603 
09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.603 BaseBdev2 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:15.603 09:14:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.603 [ 00:15:15.603 { 00:15:15.603 "name": "BaseBdev2", 00:15:15.603 "aliases": [ 00:15:15.603 "26808237-cb40-48d5-8c77-4af2a80df168" 00:15:15.603 ], 00:15:15.603 "product_name": "Malloc disk", 00:15:15.603 "block_size": 512, 00:15:15.603 "num_blocks": 65536, 00:15:15.603 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:15.603 "assigned_rate_limits": { 00:15:15.603 "rw_ios_per_sec": 0, 00:15:15.603 "rw_mbytes_per_sec": 0, 00:15:15.603 "r_mbytes_per_sec": 0, 00:15:15.603 "w_mbytes_per_sec": 0 00:15:15.603 }, 00:15:15.603 "claimed": false, 00:15:15.603 "zoned": false, 00:15:15.603 "supported_io_types": { 00:15:15.603 "read": true, 00:15:15.603 "write": true, 00:15:15.603 "unmap": true, 00:15:15.603 "flush": true, 00:15:15.603 "reset": true, 00:15:15.603 "nvme_admin": false, 00:15:15.603 "nvme_io": false, 00:15:15.603 "nvme_io_md": false, 00:15:15.603 "write_zeroes": true, 00:15:15.603 "zcopy": true, 00:15:15.603 "get_zone_info": false, 
00:15:15.603 "zone_management": false, 00:15:15.603 "zone_append": false, 00:15:15.603 "compare": false, 00:15:15.603 "compare_and_write": false, 00:15:15.603 "abort": true, 00:15:15.603 "seek_hole": false, 00:15:15.603 "seek_data": false, 00:15:15.603 "copy": true, 00:15:15.603 "nvme_iov_md": false 00:15:15.603 }, 00:15:15.603 "memory_domains": [ 00:15:15.603 { 00:15:15.603 "dma_device_id": "system", 00:15:15.603 "dma_device_type": 1 00:15:15.603 }, 00:15:15.603 { 00:15:15.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.603 "dma_device_type": 2 00:15:15.603 } 00:15:15.603 ], 00:15:15.603 "driver_specific": {} 00:15:15.603 } 00:15:15.603 ] 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.603 BaseBdev3 00:15:15.603 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:15.862 09:14:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.862 [ 00:15:15.862 { 00:15:15.862 "name": "BaseBdev3", 00:15:15.862 "aliases": [ 00:15:15.862 "b9bbd782-b11f-41bd-a890-a8a747c6554a" 00:15:15.862 ], 00:15:15.862 "product_name": "Malloc disk", 00:15:15.862 "block_size": 512, 00:15:15.862 "num_blocks": 65536, 00:15:15.862 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:15.862 "assigned_rate_limits": { 00:15:15.862 "rw_ios_per_sec": 0, 00:15:15.862 "rw_mbytes_per_sec": 0, 00:15:15.862 "r_mbytes_per_sec": 0, 00:15:15.862 "w_mbytes_per_sec": 0 00:15:15.862 }, 00:15:15.862 "claimed": false, 00:15:15.862 "zoned": false, 00:15:15.862 "supported_io_types": { 00:15:15.862 "read": true, 00:15:15.862 "write": true, 00:15:15.862 "unmap": true, 00:15:15.862 "flush": true, 00:15:15.862 "reset": true, 00:15:15.862 "nvme_admin": false, 00:15:15.862 "nvme_io": false, 00:15:15.862 "nvme_io_md": 
false, 00:15:15.862 "write_zeroes": true, 00:15:15.862 "zcopy": true, 00:15:15.862 "get_zone_info": false, 00:15:15.862 "zone_management": false, 00:15:15.862 "zone_append": false, 00:15:15.862 "compare": false, 00:15:15.862 "compare_and_write": false, 00:15:15.862 "abort": true, 00:15:15.862 "seek_hole": false, 00:15:15.862 "seek_data": false, 00:15:15.862 "copy": true, 00:15:15.862 "nvme_iov_md": false 00:15:15.862 }, 00:15:15.862 "memory_domains": [ 00:15:15.862 { 00:15:15.862 "dma_device_id": "system", 00:15:15.862 "dma_device_type": 1 00:15:15.862 }, 00:15:15.862 { 00:15:15.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.862 "dma_device_type": 2 00:15:15.862 } 00:15:15.862 ], 00:15:15.862 "driver_specific": {} 00:15:15.862 } 00:15:15.862 ] 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.862 [2024-10-15 09:14:33.541745] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.862 [2024-10-15 09:14:33.541912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.862 [2024-10-15 09:14:33.541983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:15.862 [2024-10-15 09:14:33.544265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.862 09:14:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.862 "name": "Existed_Raid", 00:15:15.862 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:15.862 "strip_size_kb": 64, 00:15:15.862 "state": "configuring", 00:15:15.862 "raid_level": "raid5f", 00:15:15.862 "superblock": true, 00:15:15.862 "num_base_bdevs": 3, 00:15:15.862 "num_base_bdevs_discovered": 2, 00:15:15.862 "num_base_bdevs_operational": 3, 00:15:15.862 "base_bdevs_list": [ 00:15:15.862 { 00:15:15.862 "name": "BaseBdev1", 00:15:15.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.862 "is_configured": false, 00:15:15.862 "data_offset": 0, 00:15:15.862 "data_size": 0 00:15:15.862 }, 00:15:15.862 { 00:15:15.862 "name": "BaseBdev2", 00:15:15.862 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:15.862 "is_configured": true, 00:15:15.862 "data_offset": 2048, 00:15:15.862 "data_size": 63488 00:15:15.862 }, 00:15:15.862 { 00:15:15.862 "name": "BaseBdev3", 00:15:15.862 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:15.862 "is_configured": true, 00:15:15.862 "data_offset": 2048, 00:15:15.862 "data_size": 63488 00:15:15.862 } 00:15:15.862 ] 00:15:15.862 }' 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.862 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.120 [2024-10-15 09:14:33.965608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.120 
09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.120 09:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.120 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:16.120 "name": "Existed_Raid", 00:15:16.120 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:16.120 "strip_size_kb": 64, 00:15:16.120 "state": "configuring", 00:15:16.120 "raid_level": "raid5f", 00:15:16.120 "superblock": true, 00:15:16.120 "num_base_bdevs": 3, 00:15:16.120 "num_base_bdevs_discovered": 1, 00:15:16.120 "num_base_bdevs_operational": 3, 00:15:16.120 "base_bdevs_list": [ 00:15:16.120 { 00:15:16.120 "name": "BaseBdev1", 00:15:16.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.120 "is_configured": false, 00:15:16.120 "data_offset": 0, 00:15:16.120 "data_size": 0 00:15:16.120 }, 00:15:16.120 { 00:15:16.120 "name": null, 00:15:16.120 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:16.120 "is_configured": false, 00:15:16.120 "data_offset": 0, 00:15:16.120 "data_size": 63488 00:15:16.120 }, 00:15:16.120 { 00:15:16.120 "name": "BaseBdev3", 00:15:16.120 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:16.120 "is_configured": true, 00:15:16.120 "data_offset": 2048, 00:15:16.120 "data_size": 63488 00:15:16.120 } 00:15:16.120 ] 00:15:16.120 }' 00:15:16.120 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.120 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.687 [2024-10-15 09:14:34.486055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.687 BaseBdev1 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:16.687 
09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.687 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.687 [ 00:15:16.687 { 00:15:16.687 "name": "BaseBdev1", 00:15:16.687 "aliases": [ 00:15:16.687 "374c22d1-2ef7-4585-93ab-8206e0b28140" 00:15:16.687 ], 00:15:16.687 "product_name": "Malloc disk", 00:15:16.687 "block_size": 512, 00:15:16.687 "num_blocks": 65536, 00:15:16.687 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:16.687 "assigned_rate_limits": { 00:15:16.687 "rw_ios_per_sec": 0, 00:15:16.687 "rw_mbytes_per_sec": 0, 00:15:16.687 "r_mbytes_per_sec": 0, 00:15:16.687 "w_mbytes_per_sec": 0 00:15:16.687 }, 00:15:16.687 "claimed": true, 00:15:16.687 "claim_type": "exclusive_write", 00:15:16.687 "zoned": false, 00:15:16.687 "supported_io_types": { 00:15:16.687 "read": true, 00:15:16.687 "write": true, 00:15:16.687 "unmap": true, 00:15:16.687 "flush": true, 00:15:16.687 "reset": true, 00:15:16.687 "nvme_admin": false, 00:15:16.687 "nvme_io": false, 00:15:16.688 "nvme_io_md": false, 00:15:16.688 "write_zeroes": true, 00:15:16.688 "zcopy": true, 00:15:16.688 "get_zone_info": false, 00:15:16.688 "zone_management": false, 00:15:16.688 "zone_append": false, 00:15:16.688 "compare": false, 00:15:16.688 "compare_and_write": false, 00:15:16.688 "abort": true, 00:15:16.688 "seek_hole": false, 00:15:16.688 "seek_data": false, 00:15:16.688 "copy": true, 00:15:16.688 "nvme_iov_md": false 00:15:16.688 }, 00:15:16.688 "memory_domains": [ 00:15:16.688 { 00:15:16.688 "dma_device_id": "system", 00:15:16.688 "dma_device_type": 1 00:15:16.688 }, 00:15:16.688 { 00:15:16.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.688 "dma_device_type": 2 00:15:16.688 } 00:15:16.688 ], 00:15:16.688 "driver_specific": {} 00:15:16.688 } 00:15:16.688 ] 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.688 
09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:16.688 "name": "Existed_Raid", 00:15:16.688 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:16.688 "strip_size_kb": 64, 00:15:16.688 "state": "configuring", 00:15:16.688 "raid_level": "raid5f", 00:15:16.688 "superblock": true, 00:15:16.688 "num_base_bdevs": 3, 00:15:16.688 "num_base_bdevs_discovered": 2, 00:15:16.688 "num_base_bdevs_operational": 3, 00:15:16.688 "base_bdevs_list": [ 00:15:16.688 { 00:15:16.688 "name": "BaseBdev1", 00:15:16.688 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:16.688 "is_configured": true, 00:15:16.688 "data_offset": 2048, 00:15:16.688 "data_size": 63488 00:15:16.688 }, 00:15:16.688 { 00:15:16.688 "name": null, 00:15:16.688 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:16.688 "is_configured": false, 00:15:16.688 "data_offset": 0, 00:15:16.688 "data_size": 63488 00:15:16.688 }, 00:15:16.688 { 00:15:16.688 "name": "BaseBdev3", 00:15:16.688 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:16.688 "is_configured": true, 00:15:16.688 "data_offset": 2048, 00:15:16.688 "data_size": 63488 00:15:16.688 } 00:15:16.688 ] 00:15:16.688 }' 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.688 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.259 [2024-10-15 09:14:34.977886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.259 09:14:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.259 09:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.259 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.259 "name": "Existed_Raid", 00:15:17.259 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:17.259 "strip_size_kb": 64, 00:15:17.259 "state": "configuring", 00:15:17.259 "raid_level": "raid5f", 00:15:17.259 "superblock": true, 00:15:17.259 "num_base_bdevs": 3, 00:15:17.259 "num_base_bdevs_discovered": 1, 00:15:17.259 "num_base_bdevs_operational": 3, 00:15:17.259 "base_bdevs_list": [ 00:15:17.259 { 00:15:17.259 "name": "BaseBdev1", 00:15:17.259 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:17.259 "is_configured": true, 00:15:17.259 "data_offset": 2048, 00:15:17.259 "data_size": 63488 00:15:17.259 }, 00:15:17.259 { 00:15:17.259 "name": null, 00:15:17.259 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:17.259 "is_configured": false, 00:15:17.259 "data_offset": 0, 00:15:17.259 "data_size": 63488 00:15:17.259 }, 00:15:17.259 { 00:15:17.259 "name": null, 00:15:17.259 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:17.259 "is_configured": false, 00:15:17.259 "data_offset": 0, 00:15:17.259 "data_size": 63488 00:15:17.259 } 00:15:17.259 ] 00:15:17.259 }' 00:15:17.259 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.259 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.836 [2024-10-15 09:14:35.489769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.836 09:14:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.836 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.837 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.837 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.837 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.837 "name": "Existed_Raid", 00:15:17.837 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:17.837 "strip_size_kb": 64, 00:15:17.837 "state": "configuring", 00:15:17.837 "raid_level": "raid5f", 00:15:17.837 "superblock": true, 00:15:17.837 "num_base_bdevs": 3, 00:15:17.837 "num_base_bdevs_discovered": 2, 00:15:17.837 "num_base_bdevs_operational": 3, 00:15:17.837 "base_bdevs_list": [ 00:15:17.837 { 00:15:17.837 "name": "BaseBdev1", 00:15:17.837 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:17.837 "is_configured": true, 00:15:17.837 "data_offset": 2048, 00:15:17.837 "data_size": 63488 00:15:17.837 }, 00:15:17.837 { 00:15:17.837 "name": null, 00:15:17.837 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:17.837 "is_configured": false, 00:15:17.837 "data_offset": 0, 00:15:17.837 "data_size": 63488 00:15:17.837 }, 00:15:17.837 { 
00:15:17.837 "name": "BaseBdev3", 00:15:17.837 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:17.837 "is_configured": true, 00:15:17.837 "data_offset": 2048, 00:15:17.837 "data_size": 63488 00:15:17.837 } 00:15:17.837 ] 00:15:17.837 }' 00:15:17.837 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.837 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.095 09:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.095 [2024-10-15 09:14:35.961664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.355 "name": "Existed_Raid", 00:15:18.355 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:18.355 "strip_size_kb": 64, 00:15:18.355 "state": "configuring", 00:15:18.355 "raid_level": "raid5f", 00:15:18.355 "superblock": true, 00:15:18.355 "num_base_bdevs": 3, 00:15:18.355 "num_base_bdevs_discovered": 1, 00:15:18.355 
"num_base_bdevs_operational": 3, 00:15:18.355 "base_bdevs_list": [ 00:15:18.355 { 00:15:18.355 "name": null, 00:15:18.355 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:18.355 "is_configured": false, 00:15:18.355 "data_offset": 0, 00:15:18.355 "data_size": 63488 00:15:18.355 }, 00:15:18.355 { 00:15:18.355 "name": null, 00:15:18.355 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:18.355 "is_configured": false, 00:15:18.355 "data_offset": 0, 00:15:18.355 "data_size": 63488 00:15:18.355 }, 00:15:18.355 { 00:15:18.355 "name": "BaseBdev3", 00:15:18.355 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:18.355 "is_configured": true, 00:15:18.355 "data_offset": 2048, 00:15:18.355 "data_size": 63488 00:15:18.355 } 00:15:18.355 ] 00:15:18.355 }' 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.355 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.924 09:14:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.924 [2024-10-15 09:14:36.565920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.924 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.924 "name": "Existed_Raid", 00:15:18.924 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:18.924 "strip_size_kb": 64, 00:15:18.925 "state": "configuring", 00:15:18.925 "raid_level": "raid5f", 00:15:18.925 "superblock": true, 00:15:18.925 "num_base_bdevs": 3, 00:15:18.925 "num_base_bdevs_discovered": 2, 00:15:18.925 "num_base_bdevs_operational": 3, 00:15:18.925 "base_bdevs_list": [ 00:15:18.925 { 00:15:18.925 "name": null, 00:15:18.925 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:18.925 "is_configured": false, 00:15:18.925 "data_offset": 0, 00:15:18.925 "data_size": 63488 00:15:18.925 }, 00:15:18.925 { 00:15:18.925 "name": "BaseBdev2", 00:15:18.925 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:18.925 "is_configured": true, 00:15:18.925 "data_offset": 2048, 00:15:18.925 "data_size": 63488 00:15:18.925 }, 00:15:18.925 { 00:15:18.925 "name": "BaseBdev3", 00:15:18.925 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:18.925 "is_configured": true, 00:15:18.925 "data_offset": 2048, 00:15:18.925 "data_size": 63488 00:15:18.925 } 00:15:18.925 ] 00:15:18.925 }' 00:15:18.925 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.925 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.184 09:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.184 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.184 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.184 09:14:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.184 09:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 374c22d1-2ef7-4585-93ab-8206e0b28140 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.184 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.444 [2024-10-15 09:14:37.106014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:19.444 [2024-10-15 09:14:37.106402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:19.444 [2024-10-15 09:14:37.106429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:19.444 [2024-10-15 09:14:37.106756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:19.444 NewBaseBdev 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.444 [2024-10-15 09:14:37.113877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:19.444 [2024-10-15 09:14:37.113999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:19.444 [2024-10-15 09:14:37.114467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.444 [ 00:15:19.444 { 00:15:19.444 "name": "NewBaseBdev", 00:15:19.444 "aliases": [ 00:15:19.444 
"374c22d1-2ef7-4585-93ab-8206e0b28140" 00:15:19.444 ], 00:15:19.444 "product_name": "Malloc disk", 00:15:19.444 "block_size": 512, 00:15:19.444 "num_blocks": 65536, 00:15:19.444 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:19.444 "assigned_rate_limits": { 00:15:19.444 "rw_ios_per_sec": 0, 00:15:19.444 "rw_mbytes_per_sec": 0, 00:15:19.444 "r_mbytes_per_sec": 0, 00:15:19.444 "w_mbytes_per_sec": 0 00:15:19.444 }, 00:15:19.444 "claimed": true, 00:15:19.444 "claim_type": "exclusive_write", 00:15:19.444 "zoned": false, 00:15:19.444 "supported_io_types": { 00:15:19.444 "read": true, 00:15:19.444 "write": true, 00:15:19.444 "unmap": true, 00:15:19.444 "flush": true, 00:15:19.444 "reset": true, 00:15:19.444 "nvme_admin": false, 00:15:19.444 "nvme_io": false, 00:15:19.444 "nvme_io_md": false, 00:15:19.444 "write_zeroes": true, 00:15:19.444 "zcopy": true, 00:15:19.444 "get_zone_info": false, 00:15:19.444 "zone_management": false, 00:15:19.444 "zone_append": false, 00:15:19.444 "compare": false, 00:15:19.444 "compare_and_write": false, 00:15:19.444 "abort": true, 00:15:19.444 "seek_hole": false, 00:15:19.444 "seek_data": false, 00:15:19.444 "copy": true, 00:15:19.444 "nvme_iov_md": false 00:15:19.444 }, 00:15:19.444 "memory_domains": [ 00:15:19.444 { 00:15:19.444 "dma_device_id": "system", 00:15:19.444 "dma_device_type": 1 00:15:19.444 }, 00:15:19.444 { 00:15:19.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.444 "dma_device_type": 2 00:15:19.444 } 00:15:19.444 ], 00:15:19.444 "driver_specific": {} 00:15:19.444 } 00:15:19.444 ] 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.444 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.445 "name": "Existed_Raid", 00:15:19.445 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:19.445 "strip_size_kb": 64, 00:15:19.445 "state": "online", 00:15:19.445 "raid_level": "raid5f", 00:15:19.445 "superblock": true, 00:15:19.445 "num_base_bdevs": 3, 00:15:19.445 
"num_base_bdevs_discovered": 3, 00:15:19.445 "num_base_bdevs_operational": 3, 00:15:19.445 "base_bdevs_list": [ 00:15:19.445 { 00:15:19.445 "name": "NewBaseBdev", 00:15:19.445 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:19.445 "is_configured": true, 00:15:19.445 "data_offset": 2048, 00:15:19.445 "data_size": 63488 00:15:19.445 }, 00:15:19.445 { 00:15:19.445 "name": "BaseBdev2", 00:15:19.445 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:19.445 "is_configured": true, 00:15:19.445 "data_offset": 2048, 00:15:19.445 "data_size": 63488 00:15:19.445 }, 00:15:19.445 { 00:15:19.445 "name": "BaseBdev3", 00:15:19.445 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:19.445 "is_configured": true, 00:15:19.445 "data_offset": 2048, 00:15:19.445 "data_size": 63488 00:15:19.445 } 00:15:19.445 ] 00:15:19.445 }' 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.445 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.705 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.964 [2024-10-15 09:14:37.605952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:19.964 "name": "Existed_Raid", 00:15:19.964 "aliases": [ 00:15:19.964 "43e14413-c683-4324-a40a-452872b62989" 00:15:19.964 ], 00:15:19.964 "product_name": "Raid Volume", 00:15:19.964 "block_size": 512, 00:15:19.964 "num_blocks": 126976, 00:15:19.964 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:19.964 "assigned_rate_limits": { 00:15:19.964 "rw_ios_per_sec": 0, 00:15:19.964 "rw_mbytes_per_sec": 0, 00:15:19.964 "r_mbytes_per_sec": 0, 00:15:19.964 "w_mbytes_per_sec": 0 00:15:19.964 }, 00:15:19.964 "claimed": false, 00:15:19.964 "zoned": false, 00:15:19.964 "supported_io_types": { 00:15:19.964 "read": true, 00:15:19.964 "write": true, 00:15:19.964 "unmap": false, 00:15:19.964 "flush": false, 00:15:19.964 "reset": true, 00:15:19.964 "nvme_admin": false, 00:15:19.964 "nvme_io": false, 00:15:19.964 "nvme_io_md": false, 00:15:19.964 "write_zeroes": true, 00:15:19.964 "zcopy": false, 00:15:19.964 "get_zone_info": false, 00:15:19.964 "zone_management": false, 00:15:19.964 "zone_append": false, 00:15:19.964 "compare": false, 00:15:19.964 "compare_and_write": false, 00:15:19.964 "abort": false, 00:15:19.964 "seek_hole": false, 00:15:19.964 "seek_data": false, 00:15:19.964 "copy": false, 00:15:19.964 "nvme_iov_md": false 00:15:19.964 }, 00:15:19.964 "driver_specific": { 00:15:19.964 "raid": { 00:15:19.964 "uuid": "43e14413-c683-4324-a40a-452872b62989", 00:15:19.964 "strip_size_kb": 64, 00:15:19.964 "state": 
"online", 00:15:19.964 "raid_level": "raid5f", 00:15:19.964 "superblock": true, 00:15:19.964 "num_base_bdevs": 3, 00:15:19.964 "num_base_bdevs_discovered": 3, 00:15:19.964 "num_base_bdevs_operational": 3, 00:15:19.964 "base_bdevs_list": [ 00:15:19.964 { 00:15:19.964 "name": "NewBaseBdev", 00:15:19.964 "uuid": "374c22d1-2ef7-4585-93ab-8206e0b28140", 00:15:19.964 "is_configured": true, 00:15:19.964 "data_offset": 2048, 00:15:19.964 "data_size": 63488 00:15:19.964 }, 00:15:19.964 { 00:15:19.964 "name": "BaseBdev2", 00:15:19.964 "uuid": "26808237-cb40-48d5-8c77-4af2a80df168", 00:15:19.964 "is_configured": true, 00:15:19.964 "data_offset": 2048, 00:15:19.964 "data_size": 63488 00:15:19.964 }, 00:15:19.964 { 00:15:19.964 "name": "BaseBdev3", 00:15:19.964 "uuid": "b9bbd782-b11f-41bd-a890-a8a747c6554a", 00:15:19.964 "is_configured": true, 00:15:19.964 "data_offset": 2048, 00:15:19.964 "data_size": 63488 00:15:19.964 } 00:15:19.964 ] 00:15:19.964 } 00:15:19.964 } 00:15:19.964 }' 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:19.964 BaseBdev2 00:15:19.964 BaseBdev3' 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.964 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.965 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.224 [2024-10-15 09:14:37.873652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.224 [2024-10-15 09:14:37.873802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.224 [2024-10-15 09:14:37.873941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.224 [2024-10-15 09:14:37.874330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.224 [2024-10-15 09:14:37.874401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80754 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80754 ']' 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 80754 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:20.224 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.225 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80754 00:15:20.225 killing process with pid 80754 00:15:20.225 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.225 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.225 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80754' 00:15:20.225 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80754 00:15:20.225 09:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80754 00:15:20.225 [2024-10-15 09:14:37.919933] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.484 [2024-10-15 09:14:38.281767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.861 ************************************ 00:15:21.861 END TEST raid5f_state_function_test_sb 00:15:21.861 ************************************ 00:15:21.861 09:14:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:21.861 00:15:21.861 real 0m11.108s 00:15:21.861 user 0m17.321s 00:15:21.861 sys 0m1.965s 00:15:21.861 09:14:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.861 09:14:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.861 09:14:39 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:21.861 09:14:39 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:21.861 09:14:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.861 09:14:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.861 ************************************ 00:15:21.861 START TEST raid5f_superblock_test 00:15:21.861 ************************************ 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81382 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81382 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81382 ']' 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.861 09:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.119 [2024-10-15 09:14:39.761527] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:15:22.119 [2024-10-15 09:14:39.761817] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81382 ] 00:15:22.119 [2024-10-15 09:14:39.937953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.377 [2024-10-15 09:14:40.077125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.636 [2024-10-15 09:14:40.315656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.636 [2024-10-15 09:14:40.315806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.895 malloc1 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.895 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.895 [2024-10-15 09:14:40.718969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:22.895 [2024-10-15 09:14:40.719176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.896 [2024-10-15 09:14:40.719243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:22.896 [2024-10-15 09:14:40.719281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.896 [2024-10-15 09:14:40.722033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.896 [2024-10-15 09:14:40.722163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:22.896 pt1 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.896 malloc2 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.896 [2024-10-15 09:14:40.779907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.896 [2024-10-15 09:14:40.780090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.896 [2024-10-15 09:14:40.780157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:22.896 [2024-10-15 09:14:40.780196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.896 [2024-10-15 09:14:40.782867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.896 [2024-10-15 09:14:40.782984] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.896 pt2 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.896 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.155 malloc3 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.155 [2024-10-15 09:14:40.863886] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:23.155 [2024-10-15 09:14:40.864072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.155 [2024-10-15 09:14:40.864140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:23.155 [2024-10-15 09:14:40.864179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.155 [2024-10-15 09:14:40.866842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.155 [2024-10-15 09:14:40.866967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:23.155 pt3 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.155 [2024-10-15 09:14:40.875988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:23.155 [2024-10-15 09:14:40.878520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:23.155 [2024-10-15 09:14:40.878715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:23.155 [2024-10-15 09:14:40.879032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:23.155 [2024-10-15 09:14:40.879112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:23.155 [2024-10-15 09:14:40.879483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:23.155 [2024-10-15 09:14:40.886417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:23.155 [2024-10-15 09:14:40.886509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:23.155 [2024-10-15 09:14:40.886889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.155 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.156 "name": "raid_bdev1", 00:15:23.156 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:23.156 "strip_size_kb": 64, 00:15:23.156 "state": "online", 00:15:23.156 "raid_level": "raid5f", 00:15:23.156 "superblock": true, 00:15:23.156 "num_base_bdevs": 3, 00:15:23.156 "num_base_bdevs_discovered": 3, 00:15:23.156 "num_base_bdevs_operational": 3, 00:15:23.156 "base_bdevs_list": [ 00:15:23.156 { 00:15:23.156 "name": "pt1", 00:15:23.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.156 "is_configured": true, 00:15:23.156 "data_offset": 2048, 00:15:23.156 "data_size": 63488 00:15:23.156 }, 00:15:23.156 { 00:15:23.156 "name": "pt2", 00:15:23.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.156 "is_configured": true, 00:15:23.156 "data_offset": 2048, 00:15:23.156 "data_size": 63488 00:15:23.156 }, 00:15:23.156 { 00:15:23.156 "name": "pt3", 00:15:23.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:23.156 "is_configured": true, 00:15:23.156 "data_offset": 2048, 00:15:23.156 "data_size": 63488 00:15:23.156 } 00:15:23.156 ] 00:15:23.156 }' 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.156 09:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:23.724 09:14:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 [2024-10-15 09:14:41.354149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.724 "name": "raid_bdev1", 00:15:23.724 "aliases": [ 00:15:23.724 "1aa53286-d510-4f0e-a8f2-287860246b77" 00:15:23.724 ], 00:15:23.724 "product_name": "Raid Volume", 00:15:23.724 "block_size": 512, 00:15:23.724 "num_blocks": 126976, 00:15:23.724 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:23.724 "assigned_rate_limits": { 00:15:23.724 "rw_ios_per_sec": 0, 00:15:23.724 "rw_mbytes_per_sec": 0, 00:15:23.724 "r_mbytes_per_sec": 0, 00:15:23.724 "w_mbytes_per_sec": 0 00:15:23.724 }, 00:15:23.724 "claimed": false, 00:15:23.724 "zoned": false, 00:15:23.724 "supported_io_types": { 00:15:23.724 "read": true, 00:15:23.724 "write": true, 00:15:23.724 "unmap": false, 00:15:23.724 "flush": false, 00:15:23.724 "reset": true, 00:15:23.724 "nvme_admin": false, 00:15:23.724 "nvme_io": false, 00:15:23.724 "nvme_io_md": false, 
00:15:23.724 "write_zeroes": true, 00:15:23.724 "zcopy": false, 00:15:23.724 "get_zone_info": false, 00:15:23.724 "zone_management": false, 00:15:23.724 "zone_append": false, 00:15:23.724 "compare": false, 00:15:23.724 "compare_and_write": false, 00:15:23.724 "abort": false, 00:15:23.724 "seek_hole": false, 00:15:23.724 "seek_data": false, 00:15:23.724 "copy": false, 00:15:23.724 "nvme_iov_md": false 00:15:23.724 }, 00:15:23.724 "driver_specific": { 00:15:23.724 "raid": { 00:15:23.724 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:23.724 "strip_size_kb": 64, 00:15:23.724 "state": "online", 00:15:23.724 "raid_level": "raid5f", 00:15:23.724 "superblock": true, 00:15:23.724 "num_base_bdevs": 3, 00:15:23.724 "num_base_bdevs_discovered": 3, 00:15:23.724 "num_base_bdevs_operational": 3, 00:15:23.724 "base_bdevs_list": [ 00:15:23.724 { 00:15:23.724 "name": "pt1", 00:15:23.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.724 "is_configured": true, 00:15:23.724 "data_offset": 2048, 00:15:23.724 "data_size": 63488 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "name": "pt2", 00:15:23.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.724 "is_configured": true, 00:15:23.724 "data_offset": 2048, 00:15:23.724 "data_size": 63488 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "name": "pt3", 00:15:23.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:23.724 "is_configured": true, 00:15:23.724 "data_offset": 2048, 00:15:23.724 "data_size": 63488 00:15:23.724 } 00:15:23.724 ] 00:15:23.724 } 00:15:23.724 } 00:15:23.724 }' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:23.724 pt2 00:15:23.724 pt3' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.724 
09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.724 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.984 [2024-10-15 09:14:41.677964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1aa53286-d510-4f0e-a8f2-287860246b77 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1aa53286-d510-4f0e-a8f2-287860246b77 ']' 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:23.984 09:14:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.984 [2024-10-15 09:14:41.721653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.984 [2024-10-15 09:14:41.721819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.984 [2024-10-15 09:14:41.721948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.984 [2024-10-15 09:14:41.722075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.984 [2024-10-15 09:14:41.722129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.984 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.985 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.985 [2024-10-15 09:14:41.873889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:23.985 [2024-10-15 09:14:41.878320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:23.985 [2024-10-15 09:14:41.878606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:23.985 [2024-10-15 09:14:41.878839] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:23.985 [2024-10-15 09:14:41.879023] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:23.985 [2024-10-15 09:14:41.879078] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:23.985 [2024-10-15 09:14:41.879121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.985 [2024-10-15 09:14:41.879143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:23.985 request: 00:15:23.985 { 00:15:23.985 "name": "raid_bdev1", 00:15:23.985 "raid_level": "raid5f", 00:15:23.985 "base_bdevs": [ 00:15:24.245 "malloc1", 00:15:24.245 "malloc2", 00:15:24.245 "malloc3" 00:15:24.245 ], 00:15:24.245 "strip_size_kb": 64, 00:15:24.245 "superblock": false, 00:15:24.245 "method": "bdev_raid_create", 00:15:24.245 "req_id": 1 00:15:24.245 } 00:15:24.245 Got JSON-RPC error response 00:15:24.245 response: 00:15:24.245 { 00:15:24.245 "code": -17, 00:15:24.245 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:24.245 } 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.245 [2024-10-15 09:14:41.942906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:24.245 [2024-10-15 09:14:41.943005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.245 [2024-10-15 09:14:41.943030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:24.245 [2024-10-15 09:14:41.943042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.245 [2024-10-15 09:14:41.945715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.245 [2024-10-15 09:14:41.945773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:24.245 [2024-10-15 09:14:41.945890] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:24.245 [2024-10-15 09:14:41.945954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:24.245 pt1 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.245 09:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.245 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.245 "name": "raid_bdev1", 00:15:24.245 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:24.245 "strip_size_kb": 64, 00:15:24.245 "state": "configuring", 00:15:24.245 "raid_level": "raid5f", 00:15:24.245 "superblock": true, 00:15:24.245 "num_base_bdevs": 3, 00:15:24.245 "num_base_bdevs_discovered": 1, 00:15:24.245 
"num_base_bdevs_operational": 3, 00:15:24.245 "base_bdevs_list": [ 00:15:24.245 { 00:15:24.245 "name": "pt1", 00:15:24.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:24.245 "is_configured": true, 00:15:24.245 "data_offset": 2048, 00:15:24.245 "data_size": 63488 00:15:24.245 }, 00:15:24.245 { 00:15:24.245 "name": null, 00:15:24.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.245 "is_configured": false, 00:15:24.245 "data_offset": 2048, 00:15:24.245 "data_size": 63488 00:15:24.245 }, 00:15:24.245 { 00:15:24.245 "name": null, 00:15:24.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:24.245 "is_configured": false, 00:15:24.245 "data_offset": 2048, 00:15:24.245 "data_size": 63488 00:15:24.245 } 00:15:24.245 ] 00:15:24.245 }' 00:15:24.245 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.245 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.505 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:24.505 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:24.505 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.505 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.505 [2024-10-15 09:14:42.394328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:24.505 [2024-10-15 09:14:42.394513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.505 [2024-10-15 09:14:42.394562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:24.505 [2024-10-15 09:14:42.394598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.506 [2024-10-15 09:14:42.395185] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.506 [2024-10-15 09:14:42.395277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:24.506 [2024-10-15 09:14:42.395423] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:24.506 [2024-10-15 09:14:42.395484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.506 pt2 00:15:24.506 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.506 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:24.506 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.506 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.765 [2024-10-15 09:14:42.402361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.765 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.766 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.766 "name": "raid_bdev1", 00:15:24.766 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:24.766 "strip_size_kb": 64, 00:15:24.766 "state": "configuring", 00:15:24.766 "raid_level": "raid5f", 00:15:24.766 "superblock": true, 00:15:24.766 "num_base_bdevs": 3, 00:15:24.766 "num_base_bdevs_discovered": 1, 00:15:24.766 "num_base_bdevs_operational": 3, 00:15:24.766 "base_bdevs_list": [ 00:15:24.766 { 00:15:24.766 "name": "pt1", 00:15:24.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:24.766 "is_configured": true, 00:15:24.766 "data_offset": 2048, 00:15:24.766 "data_size": 63488 00:15:24.766 }, 00:15:24.766 { 00:15:24.766 "name": null, 00:15:24.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.766 "is_configured": false, 00:15:24.766 "data_offset": 0, 00:15:24.766 "data_size": 63488 00:15:24.766 }, 00:15:24.766 { 00:15:24.766 "name": null, 00:15:24.766 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:24.766 "is_configured": false, 00:15:24.766 "data_offset": 2048, 00:15:24.766 "data_size": 63488 00:15:24.766 } 00:15:24.766 ] 00:15:24.766 }' 00:15:24.766 09:14:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.766 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.024 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:25.024 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:25.024 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:25.024 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.024 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.025 [2024-10-15 09:14:42.853619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:25.025 [2024-10-15 09:14:42.853849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.025 [2024-10-15 09:14:42.853913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:25.025 [2024-10-15 09:14:42.853954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.025 [2024-10-15 09:14:42.854557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.025 [2024-10-15 09:14:42.854646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:25.025 [2024-10-15 09:14:42.854805] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:25.025 [2024-10-15 09:14:42.854873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:25.025 pt2 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:25.025 09:14:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.025 [2024-10-15 09:14:42.865638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:25.025 [2024-10-15 09:14:42.865819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.025 [2024-10-15 09:14:42.865872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:25.025 [2024-10-15 09:14:42.865911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.025 [2024-10-15 09:14:42.866454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.025 [2024-10-15 09:14:42.866533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:25.025 [2024-10-15 09:14:42.866660] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:25.025 [2024-10-15 09:14:42.866736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:25.025 [2024-10-15 09:14:42.866941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:25.025 [2024-10-15 09:14:42.866989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:25.025 [2024-10-15 09:14:42.867302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:25.025 [2024-10-15 09:14:42.873524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:25.025 [2024-10-15 09:14:42.873607] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:25.025 [2024-10-15 09:14:42.873950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.025 pt3 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.025 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.285 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.285 "name": "raid_bdev1", 00:15:25.285 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:25.285 "strip_size_kb": 64, 00:15:25.285 "state": "online", 00:15:25.285 "raid_level": "raid5f", 00:15:25.285 "superblock": true, 00:15:25.285 "num_base_bdevs": 3, 00:15:25.285 "num_base_bdevs_discovered": 3, 00:15:25.285 "num_base_bdevs_operational": 3, 00:15:25.285 "base_bdevs_list": [ 00:15:25.285 { 00:15:25.285 "name": "pt1", 00:15:25.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.285 "is_configured": true, 00:15:25.285 "data_offset": 2048, 00:15:25.285 "data_size": 63488 00:15:25.285 }, 00:15:25.285 { 00:15:25.285 "name": "pt2", 00:15:25.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.285 "is_configured": true, 00:15:25.285 "data_offset": 2048, 00:15:25.285 "data_size": 63488 00:15:25.285 }, 00:15:25.285 { 00:15:25.285 "name": "pt3", 00:15:25.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:25.285 "is_configured": true, 00:15:25.285 "data_offset": 2048, 00:15:25.285 "data_size": 63488 00:15:25.285 } 00:15:25.285 ] 00:15:25.285 }' 00:15:25.285 09:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.285 09:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.556 [2024-10-15 09:14:43.397344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:25.556 "name": "raid_bdev1", 00:15:25.556 "aliases": [ 00:15:25.556 "1aa53286-d510-4f0e-a8f2-287860246b77" 00:15:25.556 ], 00:15:25.556 "product_name": "Raid Volume", 00:15:25.556 "block_size": 512, 00:15:25.556 "num_blocks": 126976, 00:15:25.556 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:25.556 "assigned_rate_limits": { 00:15:25.556 "rw_ios_per_sec": 0, 00:15:25.556 "rw_mbytes_per_sec": 0, 00:15:25.556 "r_mbytes_per_sec": 0, 00:15:25.556 "w_mbytes_per_sec": 0 00:15:25.556 }, 00:15:25.556 "claimed": false, 00:15:25.556 "zoned": false, 00:15:25.556 "supported_io_types": { 00:15:25.556 "read": true, 00:15:25.556 "write": true, 00:15:25.556 "unmap": false, 00:15:25.556 "flush": false, 00:15:25.556 "reset": true, 00:15:25.556 "nvme_admin": false, 00:15:25.556 "nvme_io": false, 00:15:25.556 "nvme_io_md": false, 00:15:25.556 "write_zeroes": true, 00:15:25.556 "zcopy": false, 00:15:25.556 
"get_zone_info": false, 00:15:25.556 "zone_management": false, 00:15:25.556 "zone_append": false, 00:15:25.556 "compare": false, 00:15:25.556 "compare_and_write": false, 00:15:25.556 "abort": false, 00:15:25.556 "seek_hole": false, 00:15:25.556 "seek_data": false, 00:15:25.556 "copy": false, 00:15:25.556 "nvme_iov_md": false 00:15:25.556 }, 00:15:25.556 "driver_specific": { 00:15:25.556 "raid": { 00:15:25.556 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:25.556 "strip_size_kb": 64, 00:15:25.556 "state": "online", 00:15:25.556 "raid_level": "raid5f", 00:15:25.556 "superblock": true, 00:15:25.556 "num_base_bdevs": 3, 00:15:25.556 "num_base_bdevs_discovered": 3, 00:15:25.556 "num_base_bdevs_operational": 3, 00:15:25.556 "base_bdevs_list": [ 00:15:25.556 { 00:15:25.556 "name": "pt1", 00:15:25.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.556 "is_configured": true, 00:15:25.556 "data_offset": 2048, 00:15:25.556 "data_size": 63488 00:15:25.556 }, 00:15:25.556 { 00:15:25.556 "name": "pt2", 00:15:25.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.556 "is_configured": true, 00:15:25.556 "data_offset": 2048, 00:15:25.556 "data_size": 63488 00:15:25.556 }, 00:15:25.556 { 00:15:25.556 "name": "pt3", 00:15:25.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:25.556 "is_configured": true, 00:15:25.556 "data_offset": 2048, 00:15:25.556 "data_size": 63488 00:15:25.556 } 00:15:25.556 ] 00:15:25.556 } 00:15:25.556 } 00:15:25.556 }' 00:15:25.556 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:25.815 pt2 00:15:25.815 pt3' 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.815 09:14:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.815 09:14:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.816 [2024-10-15 09:14:43.660945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1aa53286-d510-4f0e-a8f2-287860246b77 '!=' 1aa53286-d510-4f0e-a8f2-287860246b77 ']' 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.816 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.816 [2024-10-15 09:14:43.708739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.074 "name": "raid_bdev1", 00:15:26.074 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:26.074 "strip_size_kb": 64, 00:15:26.074 "state": "online", 00:15:26.074 "raid_level": "raid5f", 00:15:26.074 "superblock": true, 00:15:26.074 "num_base_bdevs": 3, 00:15:26.074 "num_base_bdevs_discovered": 2, 00:15:26.074 "num_base_bdevs_operational": 2, 00:15:26.074 "base_bdevs_list": [ 00:15:26.074 { 00:15:26.074 "name": null, 00:15:26.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.074 "is_configured": false, 00:15:26.074 "data_offset": 0, 00:15:26.074 "data_size": 63488 00:15:26.074 }, 00:15:26.074 { 00:15:26.074 "name": "pt2", 00:15:26.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.074 "is_configured": true, 00:15:26.074 "data_offset": 2048, 00:15:26.074 "data_size": 63488 00:15:26.074 }, 00:15:26.074 { 00:15:26.074 "name": "pt3", 00:15:26.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:26.074 "is_configured": true, 00:15:26.074 "data_offset": 2048, 00:15:26.074 "data_size": 63488 00:15:26.074 } 00:15:26.074 ] 00:15:26.074 }' 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.074 09:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.333 [2024-10-15 09:14:44.151888] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.333 [2024-10-15 09:14:44.152037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.333 [2024-10-15 09:14:44.152168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.333 [2024-10-15 09:14:44.152269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.333 [2024-10-15 09:14:44.152329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:26.333 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.592 [2024-10-15 09:14:44.235730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.592 [2024-10-15 09:14:44.235912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.592 [2024-10-15 09:14:44.235954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:26.592 [2024-10-15 09:14:44.235994] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:26.592 [2024-10-15 09:14:44.238598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.592 [2024-10-15 09:14:44.238752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.592 [2024-10-15 09:14:44.238897] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:26.592 [2024-10-15 09:14:44.238985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.592 pt2 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.592 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.593 "name": "raid_bdev1", 00:15:26.593 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:26.593 "strip_size_kb": 64, 00:15:26.593 "state": "configuring", 00:15:26.593 "raid_level": "raid5f", 00:15:26.593 "superblock": true, 00:15:26.593 "num_base_bdevs": 3, 00:15:26.593 "num_base_bdevs_discovered": 1, 00:15:26.593 "num_base_bdevs_operational": 2, 00:15:26.593 "base_bdevs_list": [ 00:15:26.593 { 00:15:26.593 "name": null, 00:15:26.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.593 "is_configured": false, 00:15:26.593 "data_offset": 2048, 00:15:26.593 "data_size": 63488 00:15:26.593 }, 00:15:26.593 { 00:15:26.593 "name": "pt2", 00:15:26.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.593 "is_configured": true, 00:15:26.593 "data_offset": 2048, 00:15:26.593 "data_size": 63488 00:15:26.593 }, 00:15:26.593 { 00:15:26.593 "name": null, 00:15:26.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:26.593 "is_configured": false, 00:15:26.593 "data_offset": 2048, 00:15:26.593 "data_size": 63488 00:15:26.593 } 00:15:26.593 ] 00:15:26.593 }' 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.593 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.852 [2024-10-15 09:14:44.734887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:26.852 [2024-10-15 09:14:44.735099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.852 [2024-10-15 09:14:44.735130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:26.852 [2024-10-15 09:14:44.735144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.852 [2024-10-15 09:14:44.735735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.852 [2024-10-15 09:14:44.735767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:26.852 [2024-10-15 09:14:44.735865] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:26.852 [2024-10-15 09:14:44.735907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:26.852 [2024-10-15 09:14:44.736048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:26.852 [2024-10-15 09:14:44.736062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:26.852 [2024-10-15 09:14:44.736370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:26.852 pt3 00:15:26.852 [2024-10-15 09:14:44.742777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:26.852 [2024-10-15 09:14:44.742811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:26.852 [2024-10-15 09:14:44.743226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.852 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.111 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.111 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.111 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.111 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.111 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.111 09:14:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.111 "name": "raid_bdev1", 00:15:27.111 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:27.111 "strip_size_kb": 64, 00:15:27.111 "state": "online", 00:15:27.111 "raid_level": "raid5f", 00:15:27.111 "superblock": true, 00:15:27.111 "num_base_bdevs": 3, 00:15:27.111 "num_base_bdevs_discovered": 2, 00:15:27.111 "num_base_bdevs_operational": 2, 00:15:27.111 "base_bdevs_list": [ 00:15:27.111 { 00:15:27.111 "name": null, 00:15:27.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.111 "is_configured": false, 00:15:27.111 "data_offset": 2048, 00:15:27.111 "data_size": 63488 00:15:27.111 }, 00:15:27.111 { 00:15:27.111 "name": "pt2", 00:15:27.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.111 "is_configured": true, 00:15:27.111 "data_offset": 2048, 00:15:27.111 "data_size": 63488 00:15:27.111 }, 00:15:27.111 { 00:15:27.111 "name": "pt3", 00:15:27.111 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.111 "is_configured": true, 00:15:27.111 "data_offset": 2048, 00:15:27.111 "data_size": 63488 00:15:27.111 } 00:15:27.111 ] 00:15:27.111 }' 00:15:27.111 09:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.112 09:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.370 [2024-10-15 09:14:45.219074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.370 [2024-10-15 09:14:45.219235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.370 [2024-10-15 09:14:45.219378] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.370 [2024-10-15 09:14:45.219500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.370 [2024-10-15 09:14:45.219561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.370 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.629 [2024-10-15 09:14:45.314949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:27.629 [2024-10-15 09:14:45.315152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.629 [2024-10-15 09:14:45.315213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:27.629 [2024-10-15 09:14:45.315256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.629 [2024-10-15 09:14:45.318291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.629 [2024-10-15 09:14:45.318423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:27.629 [2024-10-15 09:14:45.318592] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:27.629 [2024-10-15 09:14:45.318708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:27.629 [2024-10-15 09:14:45.318953] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:27.629 [2024-10-15 09:14:45.319026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.629 [2024-10-15 09:14:45.319090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:27.629 [2024-10-15 09:14:45.319229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.629 pt1 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:27.629 09:14:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.629 "name": "raid_bdev1", 00:15:27.629 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:27.629 "strip_size_kb": 64, 00:15:27.629 "state": "configuring", 00:15:27.629 "raid_level": "raid5f", 00:15:27.629 
"superblock": true, 00:15:27.629 "num_base_bdevs": 3, 00:15:27.629 "num_base_bdevs_discovered": 1, 00:15:27.629 "num_base_bdevs_operational": 2, 00:15:27.629 "base_bdevs_list": [ 00:15:27.629 { 00:15:27.629 "name": null, 00:15:27.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.629 "is_configured": false, 00:15:27.629 "data_offset": 2048, 00:15:27.629 "data_size": 63488 00:15:27.629 }, 00:15:27.629 { 00:15:27.629 "name": "pt2", 00:15:27.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.629 "is_configured": true, 00:15:27.629 "data_offset": 2048, 00:15:27.629 "data_size": 63488 00:15:27.629 }, 00:15:27.629 { 00:15:27.629 "name": null, 00:15:27.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.629 "is_configured": false, 00:15:27.629 "data_offset": 2048, 00:15:27.629 "data_size": 63488 00:15:27.629 } 00:15:27.629 ] 00:15:27.629 }' 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.629 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.198 [2024-10-15 09:14:45.862330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:28.198 [2024-10-15 09:14:45.862429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.198 [2024-10-15 09:14:45.862458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:28.198 [2024-10-15 09:14:45.862472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.198 [2024-10-15 09:14:45.863118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.198 [2024-10-15 09:14:45.863160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:28.198 [2024-10-15 09:14:45.863271] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:28.198 [2024-10-15 09:14:45.863300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:28.198 [2024-10-15 09:14:45.863465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:28.198 [2024-10-15 09:14:45.863477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:28.198 [2024-10-15 09:14:45.863848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:28.198 pt3 00:15:28.198 [2024-10-15 09:14:45.871559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:28.198 [2024-10-15 09:14:45.871603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:28.198 [2024-10-15 09:14:45.871980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.198 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.199 "name": "raid_bdev1", 00:15:28.199 "uuid": "1aa53286-d510-4f0e-a8f2-287860246b77", 00:15:28.199 "strip_size_kb": 64, 00:15:28.199 "state": "online", 00:15:28.199 "raid_level": 
"raid5f", 00:15:28.199 "superblock": true, 00:15:28.199 "num_base_bdevs": 3, 00:15:28.199 "num_base_bdevs_discovered": 2, 00:15:28.199 "num_base_bdevs_operational": 2, 00:15:28.199 "base_bdevs_list": [ 00:15:28.199 { 00:15:28.199 "name": null, 00:15:28.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.199 "is_configured": false, 00:15:28.199 "data_offset": 2048, 00:15:28.199 "data_size": 63488 00:15:28.199 }, 00:15:28.199 { 00:15:28.199 "name": "pt2", 00:15:28.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.199 "is_configured": true, 00:15:28.199 "data_offset": 2048, 00:15:28.199 "data_size": 63488 00:15:28.199 }, 00:15:28.199 { 00:15:28.199 "name": "pt3", 00:15:28.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:28.199 "is_configured": true, 00:15:28.199 "data_offset": 2048, 00:15:28.199 "data_size": 63488 00:15:28.199 } 00:15:28.199 ] 00:15:28.199 }' 00:15:28.199 09:14:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.199 09:14:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.457 09:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:28.457 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.457 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.457 09:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.716 [2024-10-15 09:14:46.400186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1aa53286-d510-4f0e-a8f2-287860246b77 '!=' 1aa53286-d510-4f0e-a8f2-287860246b77 ']' 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81382 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81382 ']' 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81382 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81382 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.716 killing process with pid 81382 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81382' 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81382 00:15:28.716 [2024-10-15 09:14:46.465935] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.716 09:14:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81382 
00:15:28.716 [2024-10-15 09:14:46.466093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.716 [2024-10-15 09:14:46.466175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.716 [2024-10-15 09:14:46.466195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:28.975 [2024-10-15 09:14:46.827607] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.352 ************************************ 00:15:30.352 END TEST raid5f_superblock_test 00:15:30.352 ************************************ 00:15:30.352 09:14:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:30.352 00:15:30.352 real 0m8.497s 00:15:30.352 user 0m13.075s 00:15:30.352 sys 0m1.610s 00:15:30.352 09:14:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.352 09:14:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.352 09:14:48 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:30.352 09:14:48 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:30.352 09:14:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:30.352 09:14:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.352 09:14:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.352 ************************************ 00:15:30.352 START TEST raid5f_rebuild_test 00:15:30.352 ************************************ 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.352 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:30.612 09:14:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81831 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81831 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81831 ']' 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.612 09:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.612 [2024-10-15 09:14:48.348797] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:15:30.612 [2024-10-15 09:14:48.349029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:30.612 Zero copy mechanism will not be used. 00:15:30.612 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81831 ] 00:15:30.872 [2024-10-15 09:14:48.517194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.872 [2024-10-15 09:14:48.652982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.130 [2024-10-15 09:14:48.888981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.130 [2024-10-15 09:14:48.889133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.390 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.390 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:31.390 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.390 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:31.390 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.390 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 BaseBdev1_malloc 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 
09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 [2024-10-15 09:14:49.308735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:31.650 [2024-10-15 09:14:49.308938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.650 [2024-10-15 09:14:49.308995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.650 [2024-10-15 09:14:49.309045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.650 [2024-10-15 09:14:49.311906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.650 [2024-10-15 09:14:49.312022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.650 BaseBdev1 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 BaseBdev2_malloc 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 [2024-10-15 09:14:49.373816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:31.650 [2024-10-15 09:14:49.374017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.650 [2024-10-15 09:14:49.374075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:31.650 [2024-10-15 09:14:49.374120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.650 [2024-10-15 09:14:49.377375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.650 BaseBdev2 00:15:31.650 [2024-10-15 09:14:49.377509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 BaseBdev3_malloc 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 [2024-10-15 09:14:49.463364] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:31.650 [2024-10-15 09:14:49.463562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.650 [2024-10-15 09:14:49.463612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:31.650 [2024-10-15 09:14:49.463704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.650 [2024-10-15 09:14:49.466926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.650 [2024-10-15 09:14:49.467059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:31.650 BaseBdev3 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 spare_malloc 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 spare_delay 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 [2024-10-15 09:14:49.530647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.650 [2024-10-15 09:14:49.530852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.650 [2024-10-15 09:14:49.530898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:31.650 [2024-10-15 09:14:49.530935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.650 [2024-10-15 09:14:49.534036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.650 [2024-10-15 09:14:49.534154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.650 spare 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.650 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.650 [2024-10-15 09:14:49.538961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.650 [2024-10-15 09:14:49.541698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.650 [2024-10-15 09:14:49.541848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.650 [2024-10-15 09:14:49.542013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:31.650 [2024-10-15 09:14:49.542061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:31.650 [2024-10-15 
09:14:49.542450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:31.910 [2024-10-15 09:14:49.549248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:31.910 [2024-10-15 09:14:49.549349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:31.910 [2024-10-15 09:14:49.549753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.910 09:14:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.911 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.911 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.911 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.911 "name": "raid_bdev1", 00:15:31.911 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:31.911 "strip_size_kb": 64, 00:15:31.911 "state": "online", 00:15:31.911 "raid_level": "raid5f", 00:15:31.911 "superblock": false, 00:15:31.911 "num_base_bdevs": 3, 00:15:31.911 "num_base_bdevs_discovered": 3, 00:15:31.911 "num_base_bdevs_operational": 3, 00:15:31.911 "base_bdevs_list": [ 00:15:31.911 { 00:15:31.911 "name": "BaseBdev1", 00:15:31.911 "uuid": "f02ef6f8-da62-5595-a193-aea70a4d86b2", 00:15:31.911 "is_configured": true, 00:15:31.911 "data_offset": 0, 00:15:31.911 "data_size": 65536 00:15:31.911 }, 00:15:31.911 { 00:15:31.911 "name": "BaseBdev2", 00:15:31.911 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:31.911 "is_configured": true, 00:15:31.911 "data_offset": 0, 00:15:31.911 "data_size": 65536 00:15:31.911 }, 00:15:31.911 { 00:15:31.911 "name": "BaseBdev3", 00:15:31.911 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:31.911 "is_configured": true, 00:15:31.911 "data_offset": 0, 00:15:31.911 "data_size": 65536 00:15:31.911 } 00:15:31.911 ] 00:15:31.911 }' 00:15:31.911 09:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.911 09:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.170 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.170 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.170 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.170 
09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:32.170 [2024-10-15 09:14:50.057356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.428 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.429 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:32.687 [2024-10-15 09:14:50.392659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:32.687 /dev/nbd0 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.687 1+0 records in 00:15:32.687 1+0 records out 00:15:32.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361207 s, 
11.3 MB/s 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:32.687 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:33.255 512+0 records in 00:15:33.255 512+0 records out 00:15:33.255 67108864 bytes (67 MB, 64 MiB) copied, 0.506652 s, 132 MB/s 00:15:33.255 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:33.255 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.255 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:33.255 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.255 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:33.255 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:15:33.255 09:14:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:33.514 [2024-10-15 09:14:51.260306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.514 [2024-10-15 09:14:51.289449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.514 "name": "raid_bdev1", 00:15:33.514 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:33.514 "strip_size_kb": 64, 00:15:33.514 "state": "online", 00:15:33.514 "raid_level": "raid5f", 00:15:33.514 "superblock": false, 00:15:33.514 "num_base_bdevs": 3, 00:15:33.514 "num_base_bdevs_discovered": 2, 00:15:33.514 "num_base_bdevs_operational": 2, 00:15:33.514 "base_bdevs_list": [ 00:15:33.514 { 00:15:33.514 "name": null, 00:15:33.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.514 "is_configured": false, 00:15:33.514 "data_offset": 0, 00:15:33.514 "data_size": 65536 00:15:33.514 }, 
00:15:33.514 { 00:15:33.514 "name": "BaseBdev2", 00:15:33.514 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:33.514 "is_configured": true, 00:15:33.514 "data_offset": 0, 00:15:33.514 "data_size": 65536 00:15:33.514 }, 00:15:33.514 { 00:15:33.514 "name": "BaseBdev3", 00:15:33.514 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:33.514 "is_configured": true, 00:15:33.514 "data_offset": 0, 00:15:33.514 "data_size": 65536 00:15:33.514 } 00:15:33.514 ] 00:15:33.514 }' 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.514 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.082 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.082 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.082 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.082 [2024-10-15 09:14:51.752781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.082 [2024-10-15 09:14:51.774010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:34.082 09:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.082 09:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:34.082 [2024-10-15 09:14:51.784054] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.020 "name": "raid_bdev1", 00:15:35.020 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:35.020 "strip_size_kb": 64, 00:15:35.020 "state": "online", 00:15:35.020 "raid_level": "raid5f", 00:15:35.020 "superblock": false, 00:15:35.020 "num_base_bdevs": 3, 00:15:35.020 "num_base_bdevs_discovered": 3, 00:15:35.020 "num_base_bdevs_operational": 3, 00:15:35.020 "process": { 00:15:35.020 "type": "rebuild", 00:15:35.020 "target": "spare", 00:15:35.020 "progress": { 00:15:35.020 "blocks": 18432, 00:15:35.020 "percent": 14 00:15:35.020 } 00:15:35.020 }, 00:15:35.020 "base_bdevs_list": [ 00:15:35.020 { 00:15:35.020 "name": "spare", 00:15:35.020 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:35.020 "is_configured": true, 00:15:35.020 "data_offset": 0, 00:15:35.020 "data_size": 65536 00:15:35.020 }, 00:15:35.020 { 00:15:35.020 "name": "BaseBdev2", 00:15:35.020 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:35.020 "is_configured": true, 00:15:35.020 "data_offset": 0, 00:15:35.020 "data_size": 65536 00:15:35.020 }, 00:15:35.020 { 00:15:35.020 "name": "BaseBdev3", 00:15:35.020 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:35.020 "is_configured": true, 00:15:35.020 
"data_offset": 0, 00:15:35.020 "data_size": 65536 00:15:35.020 } 00:15:35.020 ] 00:15:35.020 }' 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.020 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.280 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.280 09:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:35.280 09:14:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.280 09:14:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.280 [2024-10-15 09:14:52.924075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.280 [2024-10-15 09:14:52.997593] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.280 [2024-10-15 09:14:52.997816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.280 [2024-10-15 09:14:52.997848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.280 [2024-10-15 09:14:52.997860] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.280 09:14:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.280 "name": "raid_bdev1", 00:15:35.280 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:35.280 "strip_size_kb": 64, 00:15:35.280 "state": "online", 00:15:35.280 "raid_level": "raid5f", 00:15:35.280 "superblock": false, 00:15:35.280 "num_base_bdevs": 3, 00:15:35.280 "num_base_bdevs_discovered": 2, 00:15:35.280 "num_base_bdevs_operational": 2, 00:15:35.280 "base_bdevs_list": [ 00:15:35.280 { 00:15:35.280 "name": null, 00:15:35.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.280 "is_configured": false, 00:15:35.280 "data_offset": 0, 00:15:35.280 "data_size": 65536 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 
"name": "BaseBdev2", 00:15:35.280 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:35.280 "is_configured": true, 00:15:35.280 "data_offset": 0, 00:15:35.280 "data_size": 65536 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "name": "BaseBdev3", 00:15:35.280 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:35.280 "is_configured": true, 00:15:35.280 "data_offset": 0, 00:15:35.280 "data_size": 65536 00:15:35.280 } 00:15:35.280 ] 00:15:35.280 }' 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.280 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.849 "name": "raid_bdev1", 00:15:35.849 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:35.849 "strip_size_kb": 64, 00:15:35.849 "state": 
"online", 00:15:35.849 "raid_level": "raid5f", 00:15:35.849 "superblock": false, 00:15:35.849 "num_base_bdevs": 3, 00:15:35.849 "num_base_bdevs_discovered": 2, 00:15:35.849 "num_base_bdevs_operational": 2, 00:15:35.849 "base_bdevs_list": [ 00:15:35.849 { 00:15:35.849 "name": null, 00:15:35.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.849 "is_configured": false, 00:15:35.849 "data_offset": 0, 00:15:35.849 "data_size": 65536 00:15:35.849 }, 00:15:35.849 { 00:15:35.849 "name": "BaseBdev2", 00:15:35.849 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:35.849 "is_configured": true, 00:15:35.849 "data_offset": 0, 00:15:35.849 "data_size": 65536 00:15:35.849 }, 00:15:35.849 { 00:15:35.849 "name": "BaseBdev3", 00:15:35.849 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:35.849 "is_configured": true, 00:15:35.849 "data_offset": 0, 00:15:35.849 "data_size": 65536 00:15:35.849 } 00:15:35.849 ] 00:15:35.849 }' 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.849 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.850 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.850 09:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.850 [2024-10-15 09:14:53.688231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.850 [2024-10-15 09:14:53.708744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:35.850 09:14:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.850 09:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:35.850 [2024-10-15 09:14:53.719617] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.227 "name": "raid_bdev1", 00:15:37.227 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:37.227 "strip_size_kb": 64, 00:15:37.227 "state": "online", 00:15:37.227 "raid_level": "raid5f", 00:15:37.227 "superblock": false, 00:15:37.227 "num_base_bdevs": 3, 00:15:37.227 "num_base_bdevs_discovered": 3, 00:15:37.227 "num_base_bdevs_operational": 3, 00:15:37.227 "process": { 00:15:37.227 "type": "rebuild", 00:15:37.227 "target": "spare", 00:15:37.227 "progress": { 
00:15:37.227 "blocks": 18432, 00:15:37.227 "percent": 14 00:15:37.227 } 00:15:37.227 }, 00:15:37.227 "base_bdevs_list": [ 00:15:37.227 { 00:15:37.227 "name": "spare", 00:15:37.227 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:37.227 "is_configured": true, 00:15:37.227 "data_offset": 0, 00:15:37.227 "data_size": 65536 00:15:37.227 }, 00:15:37.227 { 00:15:37.227 "name": "BaseBdev2", 00:15:37.227 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:37.227 "is_configured": true, 00:15:37.227 "data_offset": 0, 00:15:37.227 "data_size": 65536 00:15:37.227 }, 00:15:37.227 { 00:15:37.227 "name": "BaseBdev3", 00:15:37.227 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:37.227 "is_configured": true, 00:15:37.227 "data_offset": 0, 00:15:37.227 "data_size": 65536 00:15:37.227 } 00:15:37.227 ] 00:15:37.227 }' 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=578 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.227 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.227 "name": "raid_bdev1", 00:15:37.227 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:37.227 "strip_size_kb": 64, 00:15:37.227 "state": "online", 00:15:37.227 "raid_level": "raid5f", 00:15:37.227 "superblock": false, 00:15:37.227 "num_base_bdevs": 3, 00:15:37.227 "num_base_bdevs_discovered": 3, 00:15:37.227 "num_base_bdevs_operational": 3, 00:15:37.227 "process": { 00:15:37.227 "type": "rebuild", 00:15:37.227 "target": "spare", 00:15:37.227 "progress": { 00:15:37.227 "blocks": 22528, 00:15:37.227 "percent": 17 00:15:37.227 } 00:15:37.227 }, 00:15:37.227 "base_bdevs_list": [ 00:15:37.227 { 00:15:37.227 "name": "spare", 00:15:37.227 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:37.227 "is_configured": true, 00:15:37.227 "data_offset": 0, 00:15:37.227 "data_size": 65536 00:15:37.227 }, 00:15:37.227 { 00:15:37.227 "name": "BaseBdev2", 00:15:37.227 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:37.228 "is_configured": true, 00:15:37.228 
"data_offset": 0, 00:15:37.228 "data_size": 65536 00:15:37.228 }, 00:15:37.228 { 00:15:37.228 "name": "BaseBdev3", 00:15:37.228 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:37.228 "is_configured": true, 00:15:37.228 "data_offset": 0, 00:15:37.228 "data_size": 65536 00:15:37.228 } 00:15:37.228 ] 00:15:37.228 }' 00:15:37.228 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.228 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.228 09:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.228 09:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.228 09:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.163 09:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.421 09:14:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.421 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.421 "name": "raid_bdev1", 00:15:38.421 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:38.421 "strip_size_kb": 64, 00:15:38.421 "state": "online", 00:15:38.421 "raid_level": "raid5f", 00:15:38.421 "superblock": false, 00:15:38.421 "num_base_bdevs": 3, 00:15:38.421 "num_base_bdevs_discovered": 3, 00:15:38.421 "num_base_bdevs_operational": 3, 00:15:38.421 "process": { 00:15:38.421 "type": "rebuild", 00:15:38.421 "target": "spare", 00:15:38.421 "progress": { 00:15:38.421 "blocks": 47104, 00:15:38.421 "percent": 35 00:15:38.421 } 00:15:38.421 }, 00:15:38.421 "base_bdevs_list": [ 00:15:38.421 { 00:15:38.421 "name": "spare", 00:15:38.421 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:38.421 "is_configured": true, 00:15:38.421 "data_offset": 0, 00:15:38.421 "data_size": 65536 00:15:38.421 }, 00:15:38.421 { 00:15:38.421 "name": "BaseBdev2", 00:15:38.421 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:38.421 "is_configured": true, 00:15:38.421 "data_offset": 0, 00:15:38.421 "data_size": 65536 00:15:38.421 }, 00:15:38.421 { 00:15:38.421 "name": "BaseBdev3", 00:15:38.421 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:38.421 "is_configured": true, 00:15:38.421 "data_offset": 0, 00:15:38.421 "data_size": 65536 00:15:38.421 } 00:15:38.421 ] 00:15:38.421 }' 00:15:38.421 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.421 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.421 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.421 09:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.421 09:14:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.355 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.355 "name": "raid_bdev1", 00:15:39.355 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:39.355 "strip_size_kb": 64, 00:15:39.355 "state": "online", 00:15:39.355 "raid_level": "raid5f", 00:15:39.355 "superblock": false, 00:15:39.355 "num_base_bdevs": 3, 00:15:39.355 "num_base_bdevs_discovered": 3, 00:15:39.355 "num_base_bdevs_operational": 3, 00:15:39.355 "process": { 00:15:39.355 "type": "rebuild", 00:15:39.355 "target": "spare", 00:15:39.355 "progress": { 00:15:39.355 "blocks": 69632, 00:15:39.355 "percent": 53 00:15:39.355 } 00:15:39.355 }, 00:15:39.355 "base_bdevs_list": [ 00:15:39.355 { 00:15:39.355 "name": "spare", 00:15:39.356 
"uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:39.356 "is_configured": true, 00:15:39.356 "data_offset": 0, 00:15:39.356 "data_size": 65536 00:15:39.356 }, 00:15:39.356 { 00:15:39.356 "name": "BaseBdev2", 00:15:39.356 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:39.356 "is_configured": true, 00:15:39.356 "data_offset": 0, 00:15:39.356 "data_size": 65536 00:15:39.356 }, 00:15:39.356 { 00:15:39.356 "name": "BaseBdev3", 00:15:39.356 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:39.356 "is_configured": true, 00:15:39.356 "data_offset": 0, 00:15:39.356 "data_size": 65536 00:15:39.356 } 00:15:39.356 ] 00:15:39.356 }' 00:15:39.615 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.615 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.615 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.615 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.615 09:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.548 09:14:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.548 "name": "raid_bdev1", 00:15:40.548 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:40.548 "strip_size_kb": 64, 00:15:40.548 "state": "online", 00:15:40.548 "raid_level": "raid5f", 00:15:40.548 "superblock": false, 00:15:40.548 "num_base_bdevs": 3, 00:15:40.548 "num_base_bdevs_discovered": 3, 00:15:40.548 "num_base_bdevs_operational": 3, 00:15:40.548 "process": { 00:15:40.548 "type": "rebuild", 00:15:40.548 "target": "spare", 00:15:40.548 "progress": { 00:15:40.548 "blocks": 94208, 00:15:40.548 "percent": 71 00:15:40.548 } 00:15:40.548 }, 00:15:40.548 "base_bdevs_list": [ 00:15:40.548 { 00:15:40.548 "name": "spare", 00:15:40.548 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:40.548 "is_configured": true, 00:15:40.548 "data_offset": 0, 00:15:40.548 "data_size": 65536 00:15:40.548 }, 00:15:40.548 { 00:15:40.548 "name": "BaseBdev2", 00:15:40.548 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:40.548 "is_configured": true, 00:15:40.548 "data_offset": 0, 00:15:40.548 "data_size": 65536 00:15:40.548 }, 00:15:40.548 { 00:15:40.548 "name": "BaseBdev3", 00:15:40.548 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:40.548 "is_configured": true, 00:15:40.548 "data_offset": 0, 00:15:40.548 "data_size": 65536 00:15:40.548 } 00:15:40.548 ] 00:15:40.548 }' 00:15:40.548 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.806 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.806 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.806 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.806 09:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.740 "name": "raid_bdev1", 00:15:41.740 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:41.740 "strip_size_kb": 64, 00:15:41.740 "state": "online", 00:15:41.740 "raid_level": "raid5f", 00:15:41.740 "superblock": false, 00:15:41.740 "num_base_bdevs": 3, 00:15:41.740 "num_base_bdevs_discovered": 3, 00:15:41.740 
"num_base_bdevs_operational": 3, 00:15:41.740 "process": { 00:15:41.740 "type": "rebuild", 00:15:41.740 "target": "spare", 00:15:41.740 "progress": { 00:15:41.740 "blocks": 116736, 00:15:41.740 "percent": 89 00:15:41.740 } 00:15:41.740 }, 00:15:41.740 "base_bdevs_list": [ 00:15:41.740 { 00:15:41.740 "name": "spare", 00:15:41.740 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:41.740 "is_configured": true, 00:15:41.740 "data_offset": 0, 00:15:41.740 "data_size": 65536 00:15:41.740 }, 00:15:41.740 { 00:15:41.740 "name": "BaseBdev2", 00:15:41.740 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:41.740 "is_configured": true, 00:15:41.740 "data_offset": 0, 00:15:41.740 "data_size": 65536 00:15:41.740 }, 00:15:41.740 { 00:15:41.740 "name": "BaseBdev3", 00:15:41.740 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:41.740 "is_configured": true, 00:15:41.740 "data_offset": 0, 00:15:41.740 "data_size": 65536 00:15:41.740 } 00:15:41.740 ] 00:15:41.740 }' 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.740 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.999 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.999 09:14:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.606 [2024-10-15 09:15:00.189470] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:42.607 [2024-10-15 09:15:00.189649] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:42.607 [2024-10-15 09:15:00.189735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.865 "name": "raid_bdev1", 00:15:42.865 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:42.865 "strip_size_kb": 64, 00:15:42.865 "state": "online", 00:15:42.865 "raid_level": "raid5f", 00:15:42.865 "superblock": false, 00:15:42.865 "num_base_bdevs": 3, 00:15:42.865 "num_base_bdevs_discovered": 3, 00:15:42.865 "num_base_bdevs_operational": 3, 00:15:42.865 "base_bdevs_list": [ 00:15:42.865 { 00:15:42.865 "name": "spare", 00:15:42.865 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:42.865 "is_configured": true, 00:15:42.865 "data_offset": 0, 00:15:42.865 "data_size": 65536 00:15:42.865 }, 00:15:42.865 { 00:15:42.865 "name": "BaseBdev2", 00:15:42.865 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:42.865 "is_configured": true, 00:15:42.865 
"data_offset": 0, 00:15:42.865 "data_size": 65536 00:15:42.865 }, 00:15:42.865 { 00:15:42.865 "name": "BaseBdev3", 00:15:42.865 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:42.865 "is_configured": true, 00:15:42.865 "data_offset": 0, 00:15:42.865 "data_size": 65536 00:15:42.865 } 00:15:42.865 ] 00:15:42.865 }' 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:42.865 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.124 09:15:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.124 "name": "raid_bdev1", 00:15:43.124 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:43.124 "strip_size_kb": 64, 00:15:43.124 "state": "online", 00:15:43.124 "raid_level": "raid5f", 00:15:43.124 "superblock": false, 00:15:43.124 "num_base_bdevs": 3, 00:15:43.124 "num_base_bdevs_discovered": 3, 00:15:43.124 "num_base_bdevs_operational": 3, 00:15:43.124 "base_bdevs_list": [ 00:15:43.124 { 00:15:43.124 "name": "spare", 00:15:43.124 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:43.124 "is_configured": true, 00:15:43.124 "data_offset": 0, 00:15:43.124 "data_size": 65536 00:15:43.124 }, 00:15:43.124 { 00:15:43.124 "name": "BaseBdev2", 00:15:43.124 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:43.124 "is_configured": true, 00:15:43.124 "data_offset": 0, 00:15:43.124 "data_size": 65536 00:15:43.124 }, 00:15:43.124 { 00:15:43.124 "name": "BaseBdev3", 00:15:43.124 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:43.124 "is_configured": true, 00:15:43.124 "data_offset": 0, 00:15:43.124 "data_size": 65536 00:15:43.124 } 00:15:43.124 ] 00:15:43.124 }' 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.124 09:15:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.124 "name": "raid_bdev1", 00:15:43.124 "uuid": "aebecc50-db63-4b35-92ff-30b95240d7a4", 00:15:43.124 "strip_size_kb": 64, 00:15:43.124 "state": "online", 00:15:43.124 "raid_level": "raid5f", 00:15:43.124 "superblock": false, 00:15:43.124 "num_base_bdevs": 3, 00:15:43.124 "num_base_bdevs_discovered": 3, 00:15:43.124 "num_base_bdevs_operational": 3, 00:15:43.124 "base_bdevs_list": [ 00:15:43.124 { 00:15:43.124 "name": "spare", 00:15:43.124 "uuid": "466c9fc3-6ee3-561d-9f5a-2f75cb5260c0", 00:15:43.124 "is_configured": true, 00:15:43.124 "data_offset": 0, 00:15:43.124 "data_size": 65536 00:15:43.124 }, 00:15:43.124 { 00:15:43.124 
"name": "BaseBdev2", 00:15:43.124 "uuid": "522176bd-dbe8-56b7-a92a-6ced92da0fa8", 00:15:43.124 "is_configured": true, 00:15:43.124 "data_offset": 0, 00:15:43.124 "data_size": 65536 00:15:43.124 }, 00:15:43.124 { 00:15:43.124 "name": "BaseBdev3", 00:15:43.124 "uuid": "102d8e6e-a9d5-5efe-9b45-30edc0c7333b", 00:15:43.124 "is_configured": true, 00:15:43.124 "data_offset": 0, 00:15:43.124 "data_size": 65536 00:15:43.124 } 00:15:43.124 ] 00:15:43.124 }' 00:15:43.124 09:15:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.125 09:15:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.690 [2024-10-15 09:15:01.449709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.690 [2024-10-15 09:15:01.449763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.690 [2024-10-15 09:15:01.449876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.690 [2024-10-15 09:15:01.449978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.690 [2024-10-15 09:15:01.449998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.690 09:15:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.690 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:43.949 /dev/nbd0 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:43.949 09:15:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.949 1+0 records in 00:15:43.949 1+0 records out 00:15:43.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473464 s, 8.7 MB/s 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.949 09:15:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:44.208 /dev/nbd1 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.208 1+0 records in 00:15:44.208 1+0 records out 00:15:44.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490403 s, 8.4 MB/s 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:44.208 09:15:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.208 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:44.467 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:44.467 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.467 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:44.467 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:44.467 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:44.467 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.467 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:44.725 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:44.725 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:44.725 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:44.725 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.725 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.726 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.726 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:44.726 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:44.726 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.726 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:44.984 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:44.984 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:44.984 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:44.984 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.984 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.984 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:44.984 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:44.984 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81831 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81831 ']' 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81831 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81831 00:15:45.243 killing process with pid 81831 00:15:45.243 Received shutdown signal, test time was about 60.000000 seconds 00:15:45.243 00:15:45.243 Latency(us) 00:15:45.243 
[2024-10-15T09:15:03.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.243 [2024-10-15T09:15:03.139Z] =================================================================================================================== 00:15:45.243 [2024-10-15T09:15:03.139Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81831' 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81831 00:15:45.243 09:15:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81831 00:15:45.243 [2024-10-15 09:15:02.919918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.810 [2024-10-15 09:15:03.399520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.189 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:47.189 00:15:47.189 real 0m16.467s 00:15:47.189 user 0m20.363s 00:15:47.189 sys 0m2.276s 00:15:47.189 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.189 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.189 ************************************ 00:15:47.189 END TEST raid5f_rebuild_test 00:15:47.189 ************************************ 00:15:47.189 09:15:04 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:47.189 09:15:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:47.189 09:15:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.189 09:15:04 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.189 ************************************ 00:15:47.189 START TEST raid5f_rebuild_test_sb 00:15:47.189 ************************************ 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82294 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82294 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82294 ']' 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:47.190 09:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.190 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:47.190 Zero copy mechanism will not be used. 00:15:47.190 [2024-10-15 09:15:04.869081] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:15:47.190 [2024-10-15 09:15:04.869228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82294 ] 00:15:47.190 [2024-10-15 09:15:05.042157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.450 [2024-10-15 09:15:05.176672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.709 [2024-10-15 09:15:05.406612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.709 [2024-10-15 09:15:05.406662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.969 BaseBdev1_malloc 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.969 [2024-10-15 09:15:05.805002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:47.969 [2024-10-15 09:15:05.805098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.969 [2024-10-15 09:15:05.805131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:47.969 [2024-10-15 09:15:05.805152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.969 [2024-10-15 09:15:05.808033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.969 [2024-10-15 09:15:05.808084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:47.969 BaseBdev1 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:47.969 09:15:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.969 BaseBdev2_malloc 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.969 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.229 [2024-10-15 09:15:05.865351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:48.229 [2024-10-15 09:15:05.865432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.229 [2024-10-15 09:15:05.865458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.229 [2024-10-15 09:15:05.865470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.229 [2024-10-15 09:15:05.868105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.229 [2024-10-15 09:15:05.868151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.229 BaseBdev2 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:48.229 BaseBdev3_malloc 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.229 [2024-10-15 09:15:05.936123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:48.229 [2024-10-15 09:15:05.936192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.229 [2024-10-15 09:15:05.936219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.229 [2024-10-15 09:15:05.936232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.229 [2024-10-15 09:15:05.938762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.229 [2024-10-15 09:15:05.938810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:48.229 BaseBdev3 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.229 spare_malloc 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.229 09:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.229 spare_delay 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.229 [2024-10-15 09:15:06.010178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.229 [2024-10-15 09:15:06.010247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.229 [2024-10-15 09:15:06.010272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:48.229 [2024-10-15 09:15:06.010285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.229 [2024-10-15 09:15:06.012864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.229 [2024-10-15 09:15:06.012909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.229 spare 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.229 [2024-10-15 09:15:06.022262] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.229 [2024-10-15 09:15:06.024279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.229 [2024-10-15 09:15:06.024358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.229 [2024-10-15 09:15:06.024545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:48.229 [2024-10-15 09:15:06.024566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:48.229 [2024-10-15 09:15:06.024914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:48.229 [2024-10-15 09:15:06.031584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:48.229 [2024-10-15 09:15:06.031615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:48.229 [2024-10-15 09:15:06.031886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.229 "name": "raid_bdev1", 00:15:48.229 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:48.229 "strip_size_kb": 64, 00:15:48.229 "state": "online", 00:15:48.229 "raid_level": "raid5f", 00:15:48.229 "superblock": true, 00:15:48.229 "num_base_bdevs": 3, 00:15:48.229 "num_base_bdevs_discovered": 3, 00:15:48.229 "num_base_bdevs_operational": 3, 00:15:48.229 "base_bdevs_list": [ 00:15:48.229 { 00:15:48.229 "name": "BaseBdev1", 00:15:48.229 "uuid": "cc8ae8ca-0502-5d0d-a993-effa242a93b8", 00:15:48.229 "is_configured": true, 00:15:48.229 "data_offset": 2048, 00:15:48.229 "data_size": 63488 00:15:48.229 }, 00:15:48.229 { 00:15:48.229 "name": "BaseBdev2", 00:15:48.229 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:48.229 "is_configured": true, 00:15:48.229 "data_offset": 2048, 00:15:48.229 "data_size": 63488 00:15:48.229 }, 00:15:48.229 { 00:15:48.229 "name": "BaseBdev3", 00:15:48.229 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:48.229 "is_configured": true, 
00:15:48.229 "data_offset": 2048, 00:15:48.229 "data_size": 63488 00:15:48.229 } 00:15:48.229 ] 00:15:48.229 }' 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.229 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.801 [2024-10-15 09:15:06.490797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:48.801 09:15:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:48.801 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:49.060 [2024-10-15 09:15:06.798086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:49.060 /dev/nbd0 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:49.060 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.061 1+0 records in 00:15:49.061 1+0 records out 00:15:49.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523901 s, 7.8 MB/s 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:49.061 09:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:49.629 496+0 records in 00:15:49.629 496+0 records out 00:15:49.629 65011712 bytes (65 MB, 62 MiB) copied, 0.418426 s, 155 MB/s 00:15:49.629 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:49.629 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.629 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:49.629 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.629 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:49.629 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.629 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:49.889 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.889 [2024-10-15 09:15:07.535082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.889 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.890 [2024-10-15 09:15:07.559596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.890 09:15:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.890 "name": "raid_bdev1", 00:15:49.890 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:49.890 "strip_size_kb": 64, 00:15:49.890 "state": "online", 00:15:49.890 "raid_level": "raid5f", 00:15:49.890 "superblock": true, 00:15:49.890 "num_base_bdevs": 3, 00:15:49.890 "num_base_bdevs_discovered": 2, 00:15:49.890 "num_base_bdevs_operational": 2, 00:15:49.890 "base_bdevs_list": [ 00:15:49.890 { 00:15:49.890 "name": null, 00:15:49.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.890 "is_configured": false, 00:15:49.890 "data_offset": 0, 00:15:49.890 "data_size": 63488 00:15:49.890 }, 00:15:49.890 { 00:15:49.890 "name": "BaseBdev2", 00:15:49.890 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:49.890 "is_configured": true, 00:15:49.890 "data_offset": 2048, 00:15:49.890 "data_size": 63488 00:15:49.890 }, 00:15:49.890 { 00:15:49.890 "name": "BaseBdev3", 00:15:49.890 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:49.890 "is_configured": true, 00:15:49.890 "data_offset": 2048, 00:15:49.890 "data_size": 63488 00:15:49.890 } 00:15:49.890 ] 00:15:49.890 }' 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.890 09:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.458 09:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:50.458 09:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.458 09:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.458 [2024-10-15 09:15:08.090837] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.458 [2024-10-15 09:15:08.109651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:50.458 09:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.458 09:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:50.458 [2024-10-15 09:15:08.118524] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.397 "name": "raid_bdev1", 00:15:51.397 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:51.397 "strip_size_kb": 64, 00:15:51.397 "state": "online", 00:15:51.397 "raid_level": "raid5f", 00:15:51.397 
"superblock": true, 00:15:51.397 "num_base_bdevs": 3, 00:15:51.397 "num_base_bdevs_discovered": 3, 00:15:51.397 "num_base_bdevs_operational": 3, 00:15:51.397 "process": { 00:15:51.397 "type": "rebuild", 00:15:51.397 "target": "spare", 00:15:51.397 "progress": { 00:15:51.397 "blocks": 20480, 00:15:51.397 "percent": 16 00:15:51.397 } 00:15:51.397 }, 00:15:51.397 "base_bdevs_list": [ 00:15:51.397 { 00:15:51.397 "name": "spare", 00:15:51.397 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:51.397 "is_configured": true, 00:15:51.397 "data_offset": 2048, 00:15:51.397 "data_size": 63488 00:15:51.397 }, 00:15:51.397 { 00:15:51.397 "name": "BaseBdev2", 00:15:51.397 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:51.397 "is_configured": true, 00:15:51.397 "data_offset": 2048, 00:15:51.397 "data_size": 63488 00:15:51.397 }, 00:15:51.397 { 00:15:51.397 "name": "BaseBdev3", 00:15:51.397 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:51.397 "is_configured": true, 00:15:51.397 "data_offset": 2048, 00:15:51.397 "data_size": 63488 00:15:51.397 } 00:15:51.397 ] 00:15:51.397 }' 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.397 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.398 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.398 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.398 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:51.398 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.398 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.398 [2024-10-15 09:15:09.282697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:51.657 [2024-10-15 09:15:09.330933] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.657 [2024-10-15 09:15:09.331019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.657 [2024-10-15 09:15:09.331042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.657 [2024-10-15 09:15:09.331051] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.657 "name": "raid_bdev1", 00:15:51.657 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:51.657 "strip_size_kb": 64, 00:15:51.657 "state": "online", 00:15:51.657 "raid_level": "raid5f", 00:15:51.657 "superblock": true, 00:15:51.657 "num_base_bdevs": 3, 00:15:51.657 "num_base_bdevs_discovered": 2, 00:15:51.657 "num_base_bdevs_operational": 2, 00:15:51.657 "base_bdevs_list": [ 00:15:51.657 { 00:15:51.657 "name": null, 00:15:51.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.657 "is_configured": false, 00:15:51.657 "data_offset": 0, 00:15:51.657 "data_size": 63488 00:15:51.657 }, 00:15:51.657 { 00:15:51.657 "name": "BaseBdev2", 00:15:51.657 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:51.657 "is_configured": true, 00:15:51.657 "data_offset": 2048, 00:15:51.657 "data_size": 63488 00:15:51.657 }, 00:15:51.657 { 00:15:51.657 "name": "BaseBdev3", 00:15:51.657 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:51.657 "is_configured": true, 00:15:51.657 "data_offset": 2048, 00:15:51.657 "data_size": 63488 00:15:51.657 } 00:15:51.657 ] 00:15:51.657 }' 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.657 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.226 09:15:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.226 "name": "raid_bdev1", 00:15:52.226 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:52.226 "strip_size_kb": 64, 00:15:52.226 "state": "online", 00:15:52.226 "raid_level": "raid5f", 00:15:52.226 "superblock": true, 00:15:52.226 "num_base_bdevs": 3, 00:15:52.226 "num_base_bdevs_discovered": 2, 00:15:52.226 "num_base_bdevs_operational": 2, 00:15:52.226 "base_bdevs_list": [ 00:15:52.226 { 00:15:52.226 "name": null, 00:15:52.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.226 "is_configured": false, 00:15:52.226 "data_offset": 0, 00:15:52.226 "data_size": 63488 00:15:52.226 }, 00:15:52.226 { 00:15:52.226 "name": "BaseBdev2", 00:15:52.226 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:52.226 "is_configured": true, 00:15:52.226 "data_offset": 2048, 00:15:52.226 "data_size": 63488 00:15:52.226 }, 00:15:52.226 { 00:15:52.226 "name": "BaseBdev3", 00:15:52.226 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:52.226 "is_configured": true, 00:15:52.226 "data_offset": 2048, 00:15:52.226 
"data_size": 63488 00:15:52.226 } 00:15:52.226 ] 00:15:52.226 }' 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.226 [2024-10-15 09:15:09.979938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.226 [2024-10-15 09:15:09.999904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.226 09:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:52.226 [2024-10-15 09:15:10.009410] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.165 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.424 "name": "raid_bdev1", 00:15:53.424 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:53.424 "strip_size_kb": 64, 00:15:53.424 "state": "online", 00:15:53.424 "raid_level": "raid5f", 00:15:53.424 "superblock": true, 00:15:53.424 "num_base_bdevs": 3, 00:15:53.424 "num_base_bdevs_discovered": 3, 00:15:53.424 "num_base_bdevs_operational": 3, 00:15:53.424 "process": { 00:15:53.424 "type": "rebuild", 00:15:53.424 "target": "spare", 00:15:53.424 "progress": { 00:15:53.424 "blocks": 20480, 00:15:53.424 "percent": 16 00:15:53.424 } 00:15:53.424 }, 00:15:53.424 "base_bdevs_list": [ 00:15:53.424 { 00:15:53.424 "name": "spare", 00:15:53.424 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:53.424 "is_configured": true, 00:15:53.424 "data_offset": 2048, 00:15:53.424 "data_size": 63488 00:15:53.424 }, 00:15:53.424 { 00:15:53.424 "name": "BaseBdev2", 00:15:53.424 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:53.424 "is_configured": true, 00:15:53.424 "data_offset": 2048, 00:15:53.424 "data_size": 63488 00:15:53.424 }, 00:15:53.424 { 00:15:53.424 "name": "BaseBdev3", 00:15:53.424 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:53.424 "is_configured": true, 00:15:53.424 "data_offset": 2048, 00:15:53.424 "data_size": 63488 00:15:53.424 } 00:15:53.424 ] 00:15:53.424 }' 
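The trace above captures the `verify_raid_bdev_process` pattern: the RPC output from `bdev_raid_get_bdevs all` is filtered with `jq` for the bdev under test, then `.process.type` and `.process.target` are read with a `// "none"` default. A minimal standalone sketch of that probe, using a trimmed canned JSON document in place of a live `rpc_cmd` call (no running SPDK target is assumed here):

```shell
# Sketch of the verify_raid_bdev_process probe seen in the trace.
# The JSON below is a hand-written stand-in for the real RPC output.
all_bdevs='[{"name":"raid_bdev1",
             "process":{"type":"rebuild","target":"spare",
                        "progress":{"blocks":20480,"percent":16}}},
            {"name":"other_bdev"}]'

# Select the bdev under test by name, as bdev_raid.sh@174 does.
raid_bdev_info=$(echo "$all_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# The `// "none"` alternative supplies a default once the rebuild
# finishes and the .process object disappears from the RPC output.
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')

if [[ $process_type == rebuild && $target == spare ]]; then
    echo "rebuild running, target $target"
fi
```

Once the rebuild completes, both probes fall back to `none` because `.process` vanishes from the RPC output — exactly the transition the trace shows later, where `[[ none == \r\e\b\u\i\l\d ]]` fails and the script `break`s out of the wait loop.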
00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:53.424 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=595 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.424 "name": "raid_bdev1", 00:15:53.424 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:53.424 "strip_size_kb": 64, 00:15:53.424 "state": "online", 00:15:53.424 "raid_level": "raid5f", 00:15:53.424 "superblock": true, 00:15:53.424 "num_base_bdevs": 3, 00:15:53.424 "num_base_bdevs_discovered": 3, 00:15:53.424 "num_base_bdevs_operational": 3, 00:15:53.424 "process": { 00:15:53.424 "type": "rebuild", 00:15:53.424 "target": "spare", 00:15:53.424 "progress": { 00:15:53.424 "blocks": 22528, 00:15:53.424 "percent": 17 00:15:53.424 } 00:15:53.424 }, 00:15:53.424 "base_bdevs_list": [ 00:15:53.424 { 00:15:53.424 "name": "spare", 00:15:53.424 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:53.424 "is_configured": true, 00:15:53.424 "data_offset": 2048, 00:15:53.424 "data_size": 63488 00:15:53.424 }, 00:15:53.424 { 00:15:53.424 "name": "BaseBdev2", 00:15:53.424 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:53.424 "is_configured": true, 00:15:53.424 "data_offset": 2048, 00:15:53.424 "data_size": 63488 00:15:53.424 }, 00:15:53.424 { 00:15:53.424 "name": "BaseBdev3", 00:15:53.424 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:53.424 "is_configured": true, 00:15:53.424 "data_offset": 2048, 00:15:53.424 "data_size": 63488 00:15:53.424 } 00:15:53.424 ] 00:15:53.424 }' 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
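The `[: =: unary operator expected` message recorded above comes from `bdev_raid.sh` line 666, where the traced command `'[' = false ']'` shows that an unquoted variable expanded to nothing inside a single-bracket test. A sketch of the failure mode and the usual fixes — `flag` is a hypothetical stand-in, since the log does not show which variable was empty:

```shell
# Reproduces the "[: =: unary operator expected" failure from the trace:
# with `flag` empty and unquoted, `[ $flag = false ]` word-splits down to
# `[ = false ]`, and `[` complains about the lone `=`.
flag=""

if [ $flag = false ] 2>/dev/null; then    # broken: becomes `[ = false ]`
    echo "broken branch taken"
fi

if [ "$flag" = false ]; then              # fix 1: quote the expansion
    echo "quoted branch taken"
fi

if [[ $flag == false ]]; then             # fix 2: [[ ]] never word-splits
    echo "double-bracket branch taken"
fi

echo "empty flag matched no branch"
```

The test run survives the error because the failed `[` merely returns non-zero, so the `if` falls through and execution continues; quoting the variable would silence the message without changing control flow here.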
00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.424 09:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.803 "name": "raid_bdev1", 00:15:54.803 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:54.803 "strip_size_kb": 64, 00:15:54.803 "state": "online", 00:15:54.803 "raid_level": "raid5f", 00:15:54.803 "superblock": true, 00:15:54.803 "num_base_bdevs": 3, 00:15:54.803 "num_base_bdevs_discovered": 3, 00:15:54.803 
"num_base_bdevs_operational": 3, 00:15:54.803 "process": { 00:15:54.803 "type": "rebuild", 00:15:54.803 "target": "spare", 00:15:54.803 "progress": { 00:15:54.803 "blocks": 45056, 00:15:54.803 "percent": 35 00:15:54.803 } 00:15:54.803 }, 00:15:54.803 "base_bdevs_list": [ 00:15:54.803 { 00:15:54.803 "name": "spare", 00:15:54.803 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:54.803 "is_configured": true, 00:15:54.803 "data_offset": 2048, 00:15:54.803 "data_size": 63488 00:15:54.803 }, 00:15:54.803 { 00:15:54.803 "name": "BaseBdev2", 00:15:54.803 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:54.803 "is_configured": true, 00:15:54.803 "data_offset": 2048, 00:15:54.803 "data_size": 63488 00:15:54.803 }, 00:15:54.803 { 00:15:54.803 "name": "BaseBdev3", 00:15:54.803 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:54.803 "is_configured": true, 00:15:54.803 "data_offset": 2048, 00:15:54.803 "data_size": 63488 00:15:54.803 } 00:15:54.803 ] 00:15:54.803 }' 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.803 09:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.740 "name": "raid_bdev1", 00:15:55.740 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:55.740 "strip_size_kb": 64, 00:15:55.740 "state": "online", 00:15:55.740 "raid_level": "raid5f", 00:15:55.740 "superblock": true, 00:15:55.740 "num_base_bdevs": 3, 00:15:55.740 "num_base_bdevs_discovered": 3, 00:15:55.740 "num_base_bdevs_operational": 3, 00:15:55.740 "process": { 00:15:55.740 "type": "rebuild", 00:15:55.740 "target": "spare", 00:15:55.740 "progress": { 00:15:55.740 "blocks": 69632, 00:15:55.740 "percent": 54 00:15:55.740 } 00:15:55.740 }, 00:15:55.740 "base_bdevs_list": [ 00:15:55.740 { 00:15:55.740 "name": "spare", 00:15:55.740 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:55.740 "is_configured": true, 00:15:55.740 "data_offset": 2048, 00:15:55.740 "data_size": 63488 00:15:55.740 }, 00:15:55.740 { 00:15:55.740 "name": "BaseBdev2", 00:15:55.740 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:55.740 "is_configured": true, 00:15:55.740 "data_offset": 2048, 00:15:55.740 "data_size": 63488 00:15:55.740 }, 00:15:55.740 { 00:15:55.740 "name": "BaseBdev3", 
00:15:55.740 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:55.740 "is_configured": true, 00:15:55.740 "data_offset": 2048, 00:15:55.740 "data_size": 63488 00:15:55.740 } 00:15:55.740 ] 00:15:55.740 }' 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.740 09:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.119 "name": "raid_bdev1", 00:15:57.119 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:57.119 "strip_size_kb": 64, 00:15:57.119 "state": "online", 00:15:57.119 "raid_level": "raid5f", 00:15:57.119 "superblock": true, 00:15:57.119 "num_base_bdevs": 3, 00:15:57.119 "num_base_bdevs_discovered": 3, 00:15:57.119 "num_base_bdevs_operational": 3, 00:15:57.119 "process": { 00:15:57.119 "type": "rebuild", 00:15:57.119 "target": "spare", 00:15:57.119 "progress": { 00:15:57.119 "blocks": 92160, 00:15:57.119 "percent": 72 00:15:57.119 } 00:15:57.119 }, 00:15:57.119 "base_bdevs_list": [ 00:15:57.119 { 00:15:57.119 "name": "spare", 00:15:57.119 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:57.119 "is_configured": true, 00:15:57.119 "data_offset": 2048, 00:15:57.119 "data_size": 63488 00:15:57.119 }, 00:15:57.119 { 00:15:57.119 "name": "BaseBdev2", 00:15:57.119 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:57.119 "is_configured": true, 00:15:57.119 "data_offset": 2048, 00:15:57.119 "data_size": 63488 00:15:57.119 }, 00:15:57.119 { 00:15:57.119 "name": "BaseBdev3", 00:15:57.119 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:57.119 "is_configured": true, 00:15:57.119 "data_offset": 2048, 00:15:57.119 "data_size": 63488 00:15:57.119 } 00:15:57.119 ] 00:15:57.119 }' 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.119 09:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.056 09:15:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.056 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.056 "name": "raid_bdev1", 00:15:58.056 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:58.056 "strip_size_kb": 64, 00:15:58.056 "state": "online", 00:15:58.056 "raid_level": "raid5f", 00:15:58.056 "superblock": true, 00:15:58.056 "num_base_bdevs": 3, 00:15:58.056 "num_base_bdevs_discovered": 3, 00:15:58.056 "num_base_bdevs_operational": 3, 00:15:58.056 "process": { 00:15:58.056 "type": "rebuild", 00:15:58.056 "target": "spare", 00:15:58.056 "progress": { 00:15:58.056 "blocks": 116736, 00:15:58.056 "percent": 91 00:15:58.056 } 00:15:58.056 }, 00:15:58.056 "base_bdevs_list": [ 00:15:58.056 { 00:15:58.056 "name": "spare", 00:15:58.056 "uuid": 
"745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:58.056 "is_configured": true, 00:15:58.056 "data_offset": 2048, 00:15:58.056 "data_size": 63488 00:15:58.056 }, 00:15:58.056 { 00:15:58.056 "name": "BaseBdev2", 00:15:58.057 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:58.057 "is_configured": true, 00:15:58.057 "data_offset": 2048, 00:15:58.057 "data_size": 63488 00:15:58.057 }, 00:15:58.057 { 00:15:58.057 "name": "BaseBdev3", 00:15:58.057 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:58.057 "is_configured": true, 00:15:58.057 "data_offset": 2048, 00:15:58.057 "data_size": 63488 00:15:58.057 } 00:15:58.057 ] 00:15:58.057 }' 00:15:58.057 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.057 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.057 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.057 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.057 09:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.626 [2024-10-15 09:15:16.275416] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:58.626 [2024-10-15 09:15:16.275533] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:58.626 [2024-10-15 09:15:16.275720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.196 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.196 "name": "raid_bdev1", 00:15:59.196 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:59.196 "strip_size_kb": 64, 00:15:59.196 "state": "online", 00:15:59.196 "raid_level": "raid5f", 00:15:59.196 "superblock": true, 00:15:59.196 "num_base_bdevs": 3, 00:15:59.196 "num_base_bdevs_discovered": 3, 00:15:59.196 "num_base_bdevs_operational": 3, 00:15:59.196 "base_bdevs_list": [ 00:15:59.196 { 00:15:59.196 "name": "spare", 00:15:59.197 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:59.197 "is_configured": true, 00:15:59.197 "data_offset": 2048, 00:15:59.197 "data_size": 63488 00:15:59.197 }, 00:15:59.197 { 00:15:59.197 "name": "BaseBdev2", 00:15:59.197 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:59.197 "is_configured": true, 00:15:59.197 "data_offset": 2048, 00:15:59.197 "data_size": 63488 00:15:59.197 }, 00:15:59.197 { 00:15:59.197 "name": "BaseBdev3", 00:15:59.197 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:59.197 "is_configured": true, 00:15:59.197 "data_offset": 2048, 00:15:59.197 "data_size": 63488 00:15:59.197 } 
00:15:59.197 ] 00:15:59.197 }' 00:15:59.197 09:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.197 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.457 "name": "raid_bdev1", 00:15:59.457 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:59.457 "strip_size_kb": 64, 00:15:59.457 "state": "online", 00:15:59.457 "raid_level": 
"raid5f", 00:15:59.457 "superblock": true, 00:15:59.457 "num_base_bdevs": 3, 00:15:59.457 "num_base_bdevs_discovered": 3, 00:15:59.457 "num_base_bdevs_operational": 3, 00:15:59.457 "base_bdevs_list": [ 00:15:59.457 { 00:15:59.457 "name": "spare", 00:15:59.457 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:59.457 "is_configured": true, 00:15:59.457 "data_offset": 2048, 00:15:59.457 "data_size": 63488 00:15:59.457 }, 00:15:59.457 { 00:15:59.457 "name": "BaseBdev2", 00:15:59.457 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:59.457 "is_configured": true, 00:15:59.457 "data_offset": 2048, 00:15:59.457 "data_size": 63488 00:15:59.457 }, 00:15:59.457 { 00:15:59.457 "name": "BaseBdev3", 00:15:59.457 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:59.457 "is_configured": true, 00:15:59.457 "data_offset": 2048, 00:15:59.457 "data_size": 63488 00:15:59.457 } 00:15:59.457 ] 00:15:59.457 }' 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.457 09:15:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.457 "name": "raid_bdev1", 00:15:59.457 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:15:59.457 "strip_size_kb": 64, 00:15:59.457 "state": "online", 00:15:59.457 "raid_level": "raid5f", 00:15:59.457 "superblock": true, 00:15:59.457 "num_base_bdevs": 3, 00:15:59.457 "num_base_bdevs_discovered": 3, 00:15:59.457 "num_base_bdevs_operational": 3, 00:15:59.457 "base_bdevs_list": [ 00:15:59.457 { 00:15:59.457 "name": "spare", 00:15:59.457 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:15:59.457 "is_configured": true, 00:15:59.457 "data_offset": 2048, 00:15:59.457 "data_size": 63488 00:15:59.457 }, 00:15:59.457 { 00:15:59.457 "name": "BaseBdev2", 00:15:59.457 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:15:59.457 "is_configured": true, 00:15:59.457 "data_offset": 2048, 00:15:59.457 
"data_size": 63488 00:15:59.457 }, 00:15:59.457 { 00:15:59.457 "name": "BaseBdev3", 00:15:59.457 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:15:59.457 "is_configured": true, 00:15:59.457 "data_offset": 2048, 00:15:59.457 "data_size": 63488 00:15:59.457 } 00:15:59.457 ] 00:15:59.457 }' 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.457 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.026 [2024-10-15 09:15:17.722269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.026 [2024-10-15 09:15:17.722312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.026 [2024-10-15 09:15:17.722425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.026 [2024-10-15 09:15:17.722529] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.026 [2024-10-15 09:15:17.722548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.026 09:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:00.292 /dev/nbd0 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.292 1+0 records in 00:16:00.292 1+0 records out 00:16:00.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039554 s, 10.4 MB/s 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.292 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.293 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:00.551 /dev/nbd1 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.551 1+0 records in 00:16:00.551 1+0 records out 00:16:00.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459802 s, 8.9 MB/s 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.551 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:00.811 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:00.811 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.811 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.811 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.811 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:00.811 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.811 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.070 09:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.331 [2024-10-15 09:15:19.105943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:01.331 [2024-10-15 09:15:19.106026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.331 [2024-10-15 09:15:19.106054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:01.331 [2024-10-15 09:15:19.106068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.331 [2024-10-15 09:15:19.108883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.331 [2024-10-15 09:15:19.108934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:01.331 [2024-10-15 09:15:19.109051] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:01.331 [2024-10-15 09:15:19.109136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.331 [2024-10-15 09:15:19.109314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.331 [2024-10-15 09:15:19.109435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.331 spare 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.331 [2024-10-15 09:15:19.209360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:01.331 [2024-10-15 09:15:19.209415] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:01.331 [2024-10-15 09:15:19.209911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:01.331 [2024-10-15 09:15:19.217424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:01.331 [2024-10-15 09:15:19.217458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:01.331 [2024-10-15 09:15:19.217813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.331 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.591 09:15:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.591 "name": "raid_bdev1", 00:16:01.591 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:01.591 "strip_size_kb": 64, 00:16:01.591 "state": "online", 00:16:01.591 "raid_level": "raid5f", 00:16:01.591 "superblock": true, 00:16:01.591 "num_base_bdevs": 3, 00:16:01.591 "num_base_bdevs_discovered": 3, 00:16:01.591 "num_base_bdevs_operational": 3, 00:16:01.591 "base_bdevs_list": [ 00:16:01.591 { 00:16:01.591 "name": "spare", 00:16:01.591 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:16:01.591 "is_configured": true, 00:16:01.591 "data_offset": 2048, 00:16:01.591 "data_size": 63488 00:16:01.591 }, 00:16:01.591 { 00:16:01.591 "name": "BaseBdev2", 00:16:01.591 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:01.591 "is_configured": true, 00:16:01.591 "data_offset": 2048, 00:16:01.591 "data_size": 63488 00:16:01.591 }, 00:16:01.591 { 00:16:01.591 "name": "BaseBdev3", 00:16:01.591 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:01.591 "is_configured": true, 00:16:01.591 "data_offset": 2048, 00:16:01.591 "data_size": 63488 00:16:01.591 } 00:16:01.591 ] 00:16:01.591 }' 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.591 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:01.851 09:15:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.851 "name": "raid_bdev1", 00:16:01.851 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:01.851 "strip_size_kb": 64, 00:16:01.851 "state": "online", 00:16:01.851 "raid_level": "raid5f", 00:16:01.851 "superblock": true, 00:16:01.851 "num_base_bdevs": 3, 00:16:01.851 "num_base_bdevs_discovered": 3, 00:16:01.851 "num_base_bdevs_operational": 3, 00:16:01.851 "base_bdevs_list": [ 00:16:01.851 { 00:16:01.851 "name": "spare", 00:16:01.851 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:16:01.851 "is_configured": true, 00:16:01.851 "data_offset": 2048, 00:16:01.851 "data_size": 63488 00:16:01.851 }, 00:16:01.851 { 00:16:01.851 "name": "BaseBdev2", 00:16:01.851 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:01.851 "is_configured": true, 00:16:01.851 "data_offset": 2048, 00:16:01.851 "data_size": 63488 00:16:01.851 }, 00:16:01.851 { 00:16:01.851 "name": "BaseBdev3", 00:16:01.851 "uuid": 
"da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:01.851 "is_configured": true, 00:16:01.851 "data_offset": 2048, 00:16:01.851 "data_size": 63488 00:16:01.851 } 00:16:01.851 ] 00:16:01.851 }' 00:16:01.851 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.111 [2024-10-15 09:15:19.869806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:02.111 
09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.111 "name": "raid_bdev1", 00:16:02.111 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:02.111 "strip_size_kb": 64, 00:16:02.111 "state": "online", 00:16:02.111 "raid_level": "raid5f", 00:16:02.111 "superblock": true, 00:16:02.111 "num_base_bdevs": 3, 00:16:02.111 "num_base_bdevs_discovered": 2, 00:16:02.111 "num_base_bdevs_operational": 2, 
00:16:02.111 "base_bdevs_list": [ 00:16:02.111 { 00:16:02.111 "name": null, 00:16:02.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.111 "is_configured": false, 00:16:02.111 "data_offset": 0, 00:16:02.111 "data_size": 63488 00:16:02.111 }, 00:16:02.111 { 00:16:02.111 "name": "BaseBdev2", 00:16:02.111 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:02.111 "is_configured": true, 00:16:02.111 "data_offset": 2048, 00:16:02.111 "data_size": 63488 00:16:02.111 }, 00:16:02.111 { 00:16:02.111 "name": "BaseBdev3", 00:16:02.111 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:02.111 "is_configured": true, 00:16:02.111 "data_offset": 2048, 00:16:02.111 "data_size": 63488 00:16:02.111 } 00:16:02.111 ] 00:16:02.111 }' 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.111 09:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.679 09:15:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.679 09:15:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.679 09:15:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.679 [2024-10-15 09:15:20.345657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.679 [2024-10-15 09:15:20.345937] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:02.679 [2024-10-15 09:15:20.345973] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:02.679 [2024-10-15 09:15:20.346017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.679 [2024-10-15 09:15:20.365802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:02.679 09:15:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.679 09:15:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:02.679 [2024-10-15 09:15:20.375851] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.618 "name": "raid_bdev1", 00:16:03.618 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:03.618 "strip_size_kb": 64, 00:16:03.618 "state": "online", 00:16:03.618 
"raid_level": "raid5f", 00:16:03.618 "superblock": true, 00:16:03.618 "num_base_bdevs": 3, 00:16:03.618 "num_base_bdevs_discovered": 3, 00:16:03.618 "num_base_bdevs_operational": 3, 00:16:03.618 "process": { 00:16:03.618 "type": "rebuild", 00:16:03.618 "target": "spare", 00:16:03.618 "progress": { 00:16:03.618 "blocks": 20480, 00:16:03.618 "percent": 16 00:16:03.618 } 00:16:03.618 }, 00:16:03.618 "base_bdevs_list": [ 00:16:03.618 { 00:16:03.618 "name": "spare", 00:16:03.618 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:16:03.618 "is_configured": true, 00:16:03.618 "data_offset": 2048, 00:16:03.618 "data_size": 63488 00:16:03.618 }, 00:16:03.618 { 00:16:03.618 "name": "BaseBdev2", 00:16:03.618 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:03.618 "is_configured": true, 00:16:03.618 "data_offset": 2048, 00:16:03.618 "data_size": 63488 00:16:03.618 }, 00:16:03.618 { 00:16:03.618 "name": "BaseBdev3", 00:16:03.618 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:03.618 "is_configured": true, 00:16:03.618 "data_offset": 2048, 00:16:03.618 "data_size": 63488 00:16:03.618 } 00:16:03.618 ] 00:16:03.618 }' 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.618 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.877 [2024-10-15 09:15:21.528052] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.877 [2024-10-15 09:15:21.589002] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:03.877 [2024-10-15 09:15:21.589089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.877 [2024-10-15 09:15:21.589112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.877 [2024-10-15 09:15:21.589123] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.877 "name": "raid_bdev1", 00:16:03.877 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:03.877 "strip_size_kb": 64, 00:16:03.877 "state": "online", 00:16:03.877 "raid_level": "raid5f", 00:16:03.877 "superblock": true, 00:16:03.877 "num_base_bdevs": 3, 00:16:03.877 "num_base_bdevs_discovered": 2, 00:16:03.877 "num_base_bdevs_operational": 2, 00:16:03.877 "base_bdevs_list": [ 00:16:03.877 { 00:16:03.877 "name": null, 00:16:03.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.877 "is_configured": false, 00:16:03.877 "data_offset": 0, 00:16:03.877 "data_size": 63488 00:16:03.877 }, 00:16:03.877 { 00:16:03.877 "name": "BaseBdev2", 00:16:03.877 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:03.877 "is_configured": true, 00:16:03.877 "data_offset": 2048, 00:16:03.877 "data_size": 63488 00:16:03.877 }, 00:16:03.877 { 00:16:03.877 "name": "BaseBdev3", 00:16:03.877 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:03.877 "is_configured": true, 00:16:03.877 "data_offset": 2048, 00:16:03.877 "data_size": 63488 00:16:03.877 } 00:16:03.877 ] 00:16:03.877 }' 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.877 09:15:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.446 09:15:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:04.446 09:15:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.446 09:15:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.446 [2024-10-15 09:15:22.094907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.446 [2024-10-15 09:15:22.095002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.446 [2024-10-15 09:15:22.095029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:04.446 [2024-10-15 09:15:22.095046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.446 [2024-10-15 09:15:22.095608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.446 [2024-10-15 09:15:22.095641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.446 [2024-10-15 09:15:22.095784] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:04.446 [2024-10-15 09:15:22.095805] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:04.446 [2024-10-15 09:15:22.095818] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:04.446 [2024-10-15 09:15:22.095848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.446 [2024-10-15 09:15:22.114041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:04.446 spare 00:16:04.446 09:15:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.446 09:15:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:04.446 [2024-10-15 09:15:22.122651] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.398 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.399 "name": "raid_bdev1", 00:16:05.399 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:05.399 "strip_size_kb": 64, 00:16:05.399 "state": 
"online", 00:16:05.399 "raid_level": "raid5f", 00:16:05.399 "superblock": true, 00:16:05.399 "num_base_bdevs": 3, 00:16:05.399 "num_base_bdevs_discovered": 3, 00:16:05.399 "num_base_bdevs_operational": 3, 00:16:05.399 "process": { 00:16:05.399 "type": "rebuild", 00:16:05.399 "target": "spare", 00:16:05.399 "progress": { 00:16:05.399 "blocks": 20480, 00:16:05.399 "percent": 16 00:16:05.399 } 00:16:05.399 }, 00:16:05.399 "base_bdevs_list": [ 00:16:05.399 { 00:16:05.399 "name": "spare", 00:16:05.399 "uuid": "745a3b48-2ae1-51d3-a5a9-3052919d7b43", 00:16:05.399 "is_configured": true, 00:16:05.399 "data_offset": 2048, 00:16:05.399 "data_size": 63488 00:16:05.399 }, 00:16:05.399 { 00:16:05.399 "name": "BaseBdev2", 00:16:05.399 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:05.399 "is_configured": true, 00:16:05.399 "data_offset": 2048, 00:16:05.399 "data_size": 63488 00:16:05.399 }, 00:16:05.399 { 00:16:05.399 "name": "BaseBdev3", 00:16:05.399 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:05.399 "is_configured": true, 00:16:05.399 "data_offset": 2048, 00:16:05.399 "data_size": 63488 00:16:05.399 } 00:16:05.399 ] 00:16:05.399 }' 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.399 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.399 [2024-10-15 09:15:23.263188] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.659 [2024-10-15 09:15:23.335309] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:05.659 [2024-10-15 09:15:23.335399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.659 [2024-10-15 09:15:23.335423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.659 [2024-10-15 09:15:23.335433] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.659 "name": "raid_bdev1", 00:16:05.659 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:05.659 "strip_size_kb": 64, 00:16:05.659 "state": "online", 00:16:05.659 "raid_level": "raid5f", 00:16:05.659 "superblock": true, 00:16:05.659 "num_base_bdevs": 3, 00:16:05.659 "num_base_bdevs_discovered": 2, 00:16:05.659 "num_base_bdevs_operational": 2, 00:16:05.659 "base_bdevs_list": [ 00:16:05.659 { 00:16:05.659 "name": null, 00:16:05.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.659 "is_configured": false, 00:16:05.659 "data_offset": 0, 00:16:05.659 "data_size": 63488 00:16:05.659 }, 00:16:05.659 { 00:16:05.659 "name": "BaseBdev2", 00:16:05.659 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:05.659 "is_configured": true, 00:16:05.659 "data_offset": 2048, 00:16:05.659 "data_size": 63488 00:16:05.659 }, 00:16:05.659 { 00:16:05.659 "name": "BaseBdev3", 00:16:05.659 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:05.659 "is_configured": true, 00:16:05.659 "data_offset": 2048, 00:16:05.659 "data_size": 63488 00:16:05.659 } 00:16:05.659 ] 00:16:05.659 }' 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.659 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.228 "name": "raid_bdev1", 00:16:06.228 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:06.228 "strip_size_kb": 64, 00:16:06.228 "state": "online", 00:16:06.228 "raid_level": "raid5f", 00:16:06.228 "superblock": true, 00:16:06.228 "num_base_bdevs": 3, 00:16:06.228 "num_base_bdevs_discovered": 2, 00:16:06.228 "num_base_bdevs_operational": 2, 00:16:06.228 "base_bdevs_list": [ 00:16:06.228 { 00:16:06.228 "name": null, 00:16:06.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.228 "is_configured": false, 00:16:06.228 "data_offset": 0, 00:16:06.228 "data_size": 63488 00:16:06.228 }, 00:16:06.228 { 00:16:06.228 "name": "BaseBdev2", 00:16:06.228 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:06.228 "is_configured": true, 00:16:06.228 "data_offset": 2048, 00:16:06.228 "data_size": 63488 00:16:06.228 }, 00:16:06.228 { 00:16:06.228 "name": "BaseBdev3", 00:16:06.228 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:06.228 "is_configured": true, 
00:16:06.228 "data_offset": 2048, 00:16:06.228 "data_size": 63488 00:16:06.228 } 00:16:06.228 ] 00:16:06.228 }' 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.228 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.228 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.228 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:06.228 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.228 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.228 [2024-10-15 09:15:24.009738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:06.228 [2024-10-15 09:15:24.009835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.228 [2024-10-15 09:15:24.009868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:06.228 [2024-10-15 09:15:24.009880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.228 [2024-10-15 09:15:24.010435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.228 [2024-10-15 
09:15:24.010469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:06.228 [2024-10-15 09:15:24.010577] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:06.228 [2024-10-15 09:15:24.010594] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:06.228 [2024-10-15 09:15:24.010621] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:06.228 [2024-10-15 09:15:24.010635] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:06.228 BaseBdev1 00:16:06.228 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.228 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.167 09:15:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.167 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.426 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.426 "name": "raid_bdev1", 00:16:07.426 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:07.426 "strip_size_kb": 64, 00:16:07.426 "state": "online", 00:16:07.426 "raid_level": "raid5f", 00:16:07.426 "superblock": true, 00:16:07.426 "num_base_bdevs": 3, 00:16:07.426 "num_base_bdevs_discovered": 2, 00:16:07.426 "num_base_bdevs_operational": 2, 00:16:07.426 "base_bdevs_list": [ 00:16:07.426 { 00:16:07.426 "name": null, 00:16:07.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.426 "is_configured": false, 00:16:07.426 "data_offset": 0, 00:16:07.426 "data_size": 63488 00:16:07.426 }, 00:16:07.426 { 00:16:07.426 "name": "BaseBdev2", 00:16:07.426 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:07.426 "is_configured": true, 00:16:07.426 "data_offset": 2048, 00:16:07.426 "data_size": 63488 00:16:07.426 }, 00:16:07.426 { 00:16:07.426 "name": "BaseBdev3", 00:16:07.426 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:07.426 "is_configured": true, 00:16:07.426 "data_offset": 2048, 00:16:07.426 "data_size": 63488 00:16:07.426 } 00:16:07.426 ] 00:16:07.426 }' 00:16:07.426 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.426 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.684 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.684 "name": "raid_bdev1", 00:16:07.684 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:07.685 "strip_size_kb": 64, 00:16:07.685 "state": "online", 00:16:07.685 "raid_level": "raid5f", 00:16:07.685 "superblock": true, 00:16:07.685 "num_base_bdevs": 3, 00:16:07.685 "num_base_bdevs_discovered": 2, 00:16:07.685 "num_base_bdevs_operational": 2, 00:16:07.685 "base_bdevs_list": [ 00:16:07.685 { 00:16:07.685 "name": null, 00:16:07.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.685 "is_configured": false, 00:16:07.685 "data_offset": 0, 00:16:07.685 "data_size": 63488 00:16:07.685 }, 00:16:07.685 { 00:16:07.685 "name": "BaseBdev2", 00:16:07.685 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 
00:16:07.685 "is_configured": true, 00:16:07.685 "data_offset": 2048, 00:16:07.685 "data_size": 63488 00:16:07.685 }, 00:16:07.685 { 00:16:07.685 "name": "BaseBdev3", 00:16:07.685 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:07.685 "is_configured": true, 00:16:07.685 "data_offset": 2048, 00:16:07.685 "data_size": 63488 00:16:07.685 } 00:16:07.685 ] 00:16:07.685 }' 00:16:07.685 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.685 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.685 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.685 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.685 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:07.685 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:07.685 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:07.685 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.944 09:15:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.944 [2024-10-15 09:15:25.587477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.944 [2024-10-15 09:15:25.587681] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:07.944 [2024-10-15 09:15:25.587700] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:07.944 request: 00:16:07.944 { 00:16:07.944 "base_bdev": "BaseBdev1", 00:16:07.944 "raid_bdev": "raid_bdev1", 00:16:07.944 "method": "bdev_raid_add_base_bdev", 00:16:07.944 "req_id": 1 00:16:07.944 } 00:16:07.944 Got JSON-RPC error response 00:16:07.944 response: 00:16:07.944 { 00:16:07.944 "code": -22, 00:16:07.944 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:07.944 } 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:07.944 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.881 "name": "raid_bdev1", 00:16:08.881 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:08.881 "strip_size_kb": 64, 00:16:08.881 "state": "online", 00:16:08.881 "raid_level": "raid5f", 00:16:08.881 "superblock": true, 00:16:08.881 "num_base_bdevs": 3, 00:16:08.881 "num_base_bdevs_discovered": 2, 00:16:08.881 "num_base_bdevs_operational": 2, 00:16:08.881 "base_bdevs_list": [ 00:16:08.881 { 00:16:08.881 "name": null, 00:16:08.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.881 "is_configured": false, 00:16:08.881 "data_offset": 0, 00:16:08.881 "data_size": 63488 00:16:08.881 }, 00:16:08.881 { 00:16:08.881 
"name": "BaseBdev2", 00:16:08.881 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:08.881 "is_configured": true, 00:16:08.881 "data_offset": 2048, 00:16:08.881 "data_size": 63488 00:16:08.881 }, 00:16:08.881 { 00:16:08.881 "name": "BaseBdev3", 00:16:08.881 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:08.881 "is_configured": true, 00:16:08.881 "data_offset": 2048, 00:16:08.881 "data_size": 63488 00:16:08.881 } 00:16:08.881 ] 00:16:08.881 }' 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.881 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.450 "name": "raid_bdev1", 00:16:09.450 "uuid": "5297107a-e12f-4f5b-a087-a3476ecba265", 00:16:09.450 
"strip_size_kb": 64, 00:16:09.450 "state": "online", 00:16:09.450 "raid_level": "raid5f", 00:16:09.450 "superblock": true, 00:16:09.450 "num_base_bdevs": 3, 00:16:09.450 "num_base_bdevs_discovered": 2, 00:16:09.450 "num_base_bdevs_operational": 2, 00:16:09.450 "base_bdevs_list": [ 00:16:09.450 { 00:16:09.450 "name": null, 00:16:09.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.450 "is_configured": false, 00:16:09.450 "data_offset": 0, 00:16:09.450 "data_size": 63488 00:16:09.450 }, 00:16:09.450 { 00:16:09.450 "name": "BaseBdev2", 00:16:09.450 "uuid": "c9589d22-6530-5a38-8665-07f2b838f46b", 00:16:09.450 "is_configured": true, 00:16:09.450 "data_offset": 2048, 00:16:09.450 "data_size": 63488 00:16:09.450 }, 00:16:09.450 { 00:16:09.450 "name": "BaseBdev3", 00:16:09.450 "uuid": "da0fb08e-748f-5589-a7eb-139e53ba1e71", 00:16:09.450 "is_configured": true, 00:16:09.450 "data_offset": 2048, 00:16:09.450 "data_size": 63488 00:16:09.450 } 00:16:09.450 ] 00:16:09.450 }' 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82294 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82294 ']' 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82294 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:09.450 09:15:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82294 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:09.450 killing process with pid 82294 00:16:09.450 Received shutdown signal, test time was about 60.000000 seconds 00:16:09.450 00:16:09.450 Latency(us) 00:16:09.450 [2024-10-15T09:15:27.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.450 [2024-10-15T09:15:27.346Z] =================================================================================================================== 00:16:09.450 [2024-10-15T09:15:27.346Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82294' 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82294 00:16:09.450 [2024-10-15 09:15:27.258633] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.450 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82294 00:16:09.450 [2024-10-15 09:15:27.258829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.450 [2024-10-15 09:15:27.258910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.450 [2024-10-15 09:15:27.258925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:10.018 [2024-10-15 09:15:27.714356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:11.393 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:11.393 00:16:11.393 real 0m24.199s 00:16:11.393 user 0m31.027s 
00:16:11.393 sys 0m3.011s 00:16:11.393 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:11.393 ************************************ 00:16:11.393 END TEST raid5f_rebuild_test_sb 00:16:11.393 ************************************ 00:16:11.393 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.393 09:15:29 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:11.393 09:15:29 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:11.393 09:15:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:11.393 09:15:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:11.393 09:15:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.393 ************************************ 00:16:11.393 START TEST raid5f_state_function_test 00:16:11.393 ************************************ 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83071 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83071' 00:16:11.393 Process raid pid: 83071 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83071 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83071 ']' 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.393 09:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.393 [2024-10-15 09:15:29.155181] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:16:11.393 [2024-10-15 09:15:29.155303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.652 [2024-10-15 09:15:29.324566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.653 [2024-10-15 09:15:29.458315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.912 [2024-10-15 09:15:29.702486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.912 [2024-10-15 09:15:29.702540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.170 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.170 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:12.170 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:12.170 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.170 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.170 [2024-10-15 09:15:30.045617] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.170 [2024-10-15 09:15:30.045714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.170 [2024-10-15 09:15:30.045735] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.170 [2024-10-15 09:15:30.045753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.170 [2024-10-15 09:15:30.045766] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:12.170 [2024-10-15 09:15:30.045783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.170 [2024-10-15 09:15:30.045799] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:12.170 [2024-10-15 09:15:30.045816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:12.170 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.171 09:15:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.171 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.430 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.430 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.430 "name": "Existed_Raid", 00:16:12.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.430 "strip_size_kb": 64, 00:16:12.430 "state": "configuring", 00:16:12.430 "raid_level": "raid5f", 00:16:12.430 "superblock": false, 00:16:12.430 "num_base_bdevs": 4, 00:16:12.430 "num_base_bdevs_discovered": 0, 00:16:12.430 "num_base_bdevs_operational": 4, 00:16:12.430 "base_bdevs_list": [ 00:16:12.430 { 00:16:12.430 "name": "BaseBdev1", 00:16:12.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.430 "is_configured": false, 00:16:12.430 "data_offset": 0, 00:16:12.430 "data_size": 0 00:16:12.430 }, 00:16:12.430 { 00:16:12.430 "name": "BaseBdev2", 00:16:12.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.430 "is_configured": false, 00:16:12.430 "data_offset": 0, 00:16:12.430 "data_size": 0 00:16:12.430 }, 00:16:12.430 { 00:16:12.430 "name": "BaseBdev3", 00:16:12.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.430 "is_configured": false, 00:16:12.430 "data_offset": 0, 00:16:12.430 "data_size": 0 00:16:12.430 }, 00:16:12.430 { 00:16:12.430 "name": "BaseBdev4", 00:16:12.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.430 "is_configured": false, 00:16:12.430 "data_offset": 0, 00:16:12.430 "data_size": 0 00:16:12.430 } 00:16:12.430 ] 00:16:12.430 }' 00:16:12.430 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.430 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.688 [2024-10-15 09:15:30.520732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.688 [2024-10-15 09:15:30.520781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.688 [2024-10-15 09:15:30.532752] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.688 [2024-10-15 09:15:30.532810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.688 [2024-10-15 09:15:30.532827] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.688 [2024-10-15 09:15:30.532844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.688 [2024-10-15 09:15:30.532856] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:12.688 [2024-10-15 09:15:30.532874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.688 [2024-10-15 09:15:30.532892] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:12.688 [2024-10-15 09:15:30.532909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.688 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.947 [2024-10-15 09:15:30.588180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.947 BaseBdev1 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.947 
09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:12.947 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.948 [ 00:16:12.948 { 00:16:12.948 "name": "BaseBdev1", 00:16:12.948 "aliases": [ 00:16:12.948 "b0fd2d07-04bc-47e0-b895-0fdacf9d1481" 00:16:12.948 ], 00:16:12.948 "product_name": "Malloc disk", 00:16:12.948 "block_size": 512, 00:16:12.948 "num_blocks": 65536, 00:16:12.948 "uuid": "b0fd2d07-04bc-47e0-b895-0fdacf9d1481", 00:16:12.948 "assigned_rate_limits": { 00:16:12.948 "rw_ios_per_sec": 0, 00:16:12.948 "rw_mbytes_per_sec": 0, 00:16:12.948 "r_mbytes_per_sec": 0, 00:16:12.948 "w_mbytes_per_sec": 0 00:16:12.948 }, 00:16:12.948 "claimed": true, 00:16:12.948 "claim_type": "exclusive_write", 00:16:12.948 "zoned": false, 00:16:12.948 "supported_io_types": { 00:16:12.948 "read": true, 00:16:12.948 "write": true, 00:16:12.948 "unmap": true, 00:16:12.948 "flush": true, 00:16:12.948 "reset": true, 00:16:12.948 "nvme_admin": false, 00:16:12.948 "nvme_io": false, 00:16:12.948 "nvme_io_md": false, 00:16:12.948 "write_zeroes": true, 00:16:12.948 "zcopy": true, 00:16:12.948 "get_zone_info": false, 00:16:12.948 "zone_management": false, 00:16:12.948 "zone_append": false, 00:16:12.948 "compare": false, 00:16:12.948 "compare_and_write": false, 00:16:12.948 "abort": true, 00:16:12.948 "seek_hole": false, 00:16:12.948 "seek_data": false, 00:16:12.948 "copy": true, 00:16:12.948 "nvme_iov_md": false 00:16:12.948 }, 00:16:12.948 "memory_domains": [ 00:16:12.948 { 00:16:12.948 "dma_device_id": "system", 00:16:12.948 "dma_device_type": 1 00:16:12.948 }, 00:16:12.948 { 00:16:12.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.948 "dma_device_type": 2 00:16:12.948 } 00:16:12.948 ], 00:16:12.948 "driver_specific": {} 00:16:12.948 } 
00:16:12.948 ] 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.948 "name": "Existed_Raid", 00:16:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.948 "strip_size_kb": 64, 00:16:12.948 "state": "configuring", 00:16:12.948 "raid_level": "raid5f", 00:16:12.948 "superblock": false, 00:16:12.948 "num_base_bdevs": 4, 00:16:12.948 "num_base_bdevs_discovered": 1, 00:16:12.948 "num_base_bdevs_operational": 4, 00:16:12.948 "base_bdevs_list": [ 00:16:12.948 { 00:16:12.948 "name": "BaseBdev1", 00:16:12.948 "uuid": "b0fd2d07-04bc-47e0-b895-0fdacf9d1481", 00:16:12.948 "is_configured": true, 00:16:12.948 "data_offset": 0, 00:16:12.948 "data_size": 65536 00:16:12.948 }, 00:16:12.948 { 00:16:12.948 "name": "BaseBdev2", 00:16:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.948 "is_configured": false, 00:16:12.948 "data_offset": 0, 00:16:12.948 "data_size": 0 00:16:12.948 }, 00:16:12.948 { 00:16:12.948 "name": "BaseBdev3", 00:16:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.948 "is_configured": false, 00:16:12.948 "data_offset": 0, 00:16:12.948 "data_size": 0 00:16:12.948 }, 00:16:12.948 { 00:16:12.948 "name": "BaseBdev4", 00:16:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.948 "is_configured": false, 00:16:12.948 "data_offset": 0, 00:16:12.948 "data_size": 0 00:16:12.948 } 00:16:12.948 ] 00:16:12.948 }' 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.948 09:15:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.208 
[2024-10-15 09:15:31.083430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.208 [2024-10-15 09:15:31.083500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.208 [2024-10-15 09:15:31.095489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.208 [2024-10-15 09:15:31.097557] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.208 [2024-10-15 09:15:31.097649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.208 [2024-10-15 09:15:31.097668] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:13.208 [2024-10-15 09:15:31.097699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.208 [2024-10-15 09:15:31.097714] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:13.208 [2024-10-15 09:15:31.097731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.208 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.467 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.467 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.467 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.467 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.467 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.467 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.467 "name": "Existed_Raid", 00:16:13.467 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:13.467 "strip_size_kb": 64, 00:16:13.467 "state": "configuring", 00:16:13.467 "raid_level": "raid5f", 00:16:13.467 "superblock": false, 00:16:13.467 "num_base_bdevs": 4, 00:16:13.467 "num_base_bdevs_discovered": 1, 00:16:13.467 "num_base_bdevs_operational": 4, 00:16:13.467 "base_bdevs_list": [ 00:16:13.467 { 00:16:13.467 "name": "BaseBdev1", 00:16:13.467 "uuid": "b0fd2d07-04bc-47e0-b895-0fdacf9d1481", 00:16:13.467 "is_configured": true, 00:16:13.467 "data_offset": 0, 00:16:13.467 "data_size": 65536 00:16:13.467 }, 00:16:13.467 { 00:16:13.467 "name": "BaseBdev2", 00:16:13.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.467 "is_configured": false, 00:16:13.467 "data_offset": 0, 00:16:13.467 "data_size": 0 00:16:13.467 }, 00:16:13.467 { 00:16:13.467 "name": "BaseBdev3", 00:16:13.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.467 "is_configured": false, 00:16:13.467 "data_offset": 0, 00:16:13.467 "data_size": 0 00:16:13.467 }, 00:16:13.467 { 00:16:13.467 "name": "BaseBdev4", 00:16:13.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.467 "is_configured": false, 00:16:13.467 "data_offset": 0, 00:16:13.467 "data_size": 0 00:16:13.467 } 00:16:13.467 ] 00:16:13.467 }' 00:16:13.467 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.467 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.727 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:13.727 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.727 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.727 [2024-10-15 09:15:31.577433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.727 BaseBdev2 00:16:13.727 09:15:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.727 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.728 [ 00:16:13.728 { 00:16:13.728 "name": "BaseBdev2", 00:16:13.728 "aliases": [ 00:16:13.728 "6f896845-eee9-47e9-8e51-ef6a46911bf4" 00:16:13.728 ], 00:16:13.728 "product_name": "Malloc disk", 00:16:13.728 "block_size": 512, 00:16:13.728 "num_blocks": 65536, 00:16:13.728 "uuid": "6f896845-eee9-47e9-8e51-ef6a46911bf4", 00:16:13.728 "assigned_rate_limits": { 00:16:13.728 "rw_ios_per_sec": 0, 00:16:13.728 "rw_mbytes_per_sec": 0, 00:16:13.728 
"r_mbytes_per_sec": 0, 00:16:13.728 "w_mbytes_per_sec": 0 00:16:13.728 }, 00:16:13.728 "claimed": true, 00:16:13.728 "claim_type": "exclusive_write", 00:16:13.728 "zoned": false, 00:16:13.728 "supported_io_types": { 00:16:13.728 "read": true, 00:16:13.728 "write": true, 00:16:13.728 "unmap": true, 00:16:13.728 "flush": true, 00:16:13.728 "reset": true, 00:16:13.728 "nvme_admin": false, 00:16:13.728 "nvme_io": false, 00:16:13.728 "nvme_io_md": false, 00:16:13.728 "write_zeroes": true, 00:16:13.728 "zcopy": true, 00:16:13.728 "get_zone_info": false, 00:16:13.728 "zone_management": false, 00:16:13.728 "zone_append": false, 00:16:13.728 "compare": false, 00:16:13.728 "compare_and_write": false, 00:16:13.728 "abort": true, 00:16:13.728 "seek_hole": false, 00:16:13.728 "seek_data": false, 00:16:13.728 "copy": true, 00:16:13.728 "nvme_iov_md": false 00:16:13.728 }, 00:16:13.728 "memory_domains": [ 00:16:13.728 { 00:16:13.728 "dma_device_id": "system", 00:16:13.728 "dma_device_type": 1 00:16:13.728 }, 00:16:13.728 { 00:16:13.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.728 "dma_device_type": 2 00:16:13.728 } 00:16:13.728 ], 00:16:13.728 "driver_specific": {} 00:16:13.728 } 00:16:13.728 ] 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.728 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.988 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.988 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.988 "name": "Existed_Raid", 00:16:13.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.988 "strip_size_kb": 64, 00:16:13.988 "state": "configuring", 00:16:13.988 "raid_level": "raid5f", 00:16:13.988 "superblock": false, 00:16:13.988 "num_base_bdevs": 4, 00:16:13.988 "num_base_bdevs_discovered": 2, 00:16:13.988 "num_base_bdevs_operational": 4, 00:16:13.988 "base_bdevs_list": [ 00:16:13.988 { 00:16:13.988 "name": "BaseBdev1", 00:16:13.988 "uuid": 
"b0fd2d07-04bc-47e0-b895-0fdacf9d1481", 00:16:13.988 "is_configured": true, 00:16:13.988 "data_offset": 0, 00:16:13.988 "data_size": 65536 00:16:13.988 }, 00:16:13.988 { 00:16:13.988 "name": "BaseBdev2", 00:16:13.988 "uuid": "6f896845-eee9-47e9-8e51-ef6a46911bf4", 00:16:13.988 "is_configured": true, 00:16:13.988 "data_offset": 0, 00:16:13.988 "data_size": 65536 00:16:13.988 }, 00:16:13.988 { 00:16:13.988 "name": "BaseBdev3", 00:16:13.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.988 "is_configured": false, 00:16:13.988 "data_offset": 0, 00:16:13.988 "data_size": 0 00:16:13.988 }, 00:16:13.988 { 00:16:13.988 "name": "BaseBdev4", 00:16:13.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.988 "is_configured": false, 00:16:13.988 "data_offset": 0, 00:16:13.988 "data_size": 0 00:16:13.988 } 00:16:13.988 ] 00:16:13.988 }' 00:16:13.988 09:15:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.988 09:15:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.248 [2024-10-15 09:15:32.136481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.248 BaseBdev3 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.248 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.507 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.507 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:14.507 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.507 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.507 [ 00:16:14.507 { 00:16:14.507 "name": "BaseBdev3", 00:16:14.507 "aliases": [ 00:16:14.507 "a84ba480-169b-4bb0-8025-da6e1f104b77" 00:16:14.507 ], 00:16:14.507 "product_name": "Malloc disk", 00:16:14.507 "block_size": 512, 00:16:14.507 "num_blocks": 65536, 00:16:14.507 "uuid": "a84ba480-169b-4bb0-8025-da6e1f104b77", 00:16:14.507 "assigned_rate_limits": { 00:16:14.507 "rw_ios_per_sec": 0, 00:16:14.507 "rw_mbytes_per_sec": 0, 00:16:14.507 "r_mbytes_per_sec": 0, 00:16:14.507 "w_mbytes_per_sec": 0 00:16:14.507 }, 00:16:14.507 "claimed": true, 00:16:14.507 "claim_type": "exclusive_write", 00:16:14.507 "zoned": false, 00:16:14.507 "supported_io_types": { 00:16:14.507 "read": true, 00:16:14.507 "write": true, 00:16:14.507 "unmap": true, 00:16:14.507 "flush": true, 00:16:14.507 "reset": true, 00:16:14.507 "nvme_admin": false, 
00:16:14.507 "nvme_io": false, 00:16:14.507 "nvme_io_md": false, 00:16:14.507 "write_zeroes": true, 00:16:14.507 "zcopy": true, 00:16:14.507 "get_zone_info": false, 00:16:14.507 "zone_management": false, 00:16:14.507 "zone_append": false, 00:16:14.507 "compare": false, 00:16:14.507 "compare_and_write": false, 00:16:14.507 "abort": true, 00:16:14.507 "seek_hole": false, 00:16:14.508 "seek_data": false, 00:16:14.508 "copy": true, 00:16:14.508 "nvme_iov_md": false 00:16:14.508 }, 00:16:14.508 "memory_domains": [ 00:16:14.508 { 00:16:14.508 "dma_device_id": "system", 00:16:14.508 "dma_device_type": 1 00:16:14.508 }, 00:16:14.508 { 00:16:14.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.508 "dma_device_type": 2 00:16:14.508 } 00:16:14.508 ], 00:16:14.508 "driver_specific": {} 00:16:14.508 } 00:16:14.508 ] 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.508 "name": "Existed_Raid", 00:16:14.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.508 "strip_size_kb": 64, 00:16:14.508 "state": "configuring", 00:16:14.508 "raid_level": "raid5f", 00:16:14.508 "superblock": false, 00:16:14.508 "num_base_bdevs": 4, 00:16:14.508 "num_base_bdevs_discovered": 3, 00:16:14.508 "num_base_bdevs_operational": 4, 00:16:14.508 "base_bdevs_list": [ 00:16:14.508 { 00:16:14.508 "name": "BaseBdev1", 00:16:14.508 "uuid": "b0fd2d07-04bc-47e0-b895-0fdacf9d1481", 00:16:14.508 "is_configured": true, 00:16:14.508 "data_offset": 0, 00:16:14.508 "data_size": 65536 00:16:14.508 }, 00:16:14.508 { 00:16:14.508 "name": "BaseBdev2", 00:16:14.508 "uuid": "6f896845-eee9-47e9-8e51-ef6a46911bf4", 00:16:14.508 "is_configured": true, 00:16:14.508 "data_offset": 0, 00:16:14.508 "data_size": 65536 00:16:14.508 }, 00:16:14.508 { 
00:16:14.508 "name": "BaseBdev3", 00:16:14.508 "uuid": "a84ba480-169b-4bb0-8025-da6e1f104b77", 00:16:14.508 "is_configured": true, 00:16:14.508 "data_offset": 0, 00:16:14.508 "data_size": 65536 00:16:14.508 }, 00:16:14.508 { 00:16:14.508 "name": "BaseBdev4", 00:16:14.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.508 "is_configured": false, 00:16:14.508 "data_offset": 0, 00:16:14.508 "data_size": 0 00:16:14.508 } 00:16:14.508 ] 00:16:14.508 }' 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.508 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.077 [2024-10-15 09:15:32.708787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:15.077 [2024-10-15 09:15:32.708879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:15.077 [2024-10-15 09:15:32.708889] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:15.077 [2024-10-15 09:15:32.709196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:15.077 [2024-10-15 09:15:32.717099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:15.077 [2024-10-15 09:15:32.717126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:15.077 [2024-10-15 09:15:32.717458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.077 BaseBdev4 00:16:15.077 09:15:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.077 [ 00:16:15.077 { 00:16:15.077 "name": "BaseBdev4", 00:16:15.077 "aliases": [ 00:16:15.077 "594e11a7-60d9-46ab-863d-b38792d8e2e4" 00:16:15.077 ], 00:16:15.077 "product_name": "Malloc disk", 00:16:15.077 "block_size": 512, 00:16:15.077 "num_blocks": 65536, 00:16:15.077 "uuid": "594e11a7-60d9-46ab-863d-b38792d8e2e4", 00:16:15.077 "assigned_rate_limits": { 00:16:15.077 "rw_ios_per_sec": 0, 00:16:15.077 
"rw_mbytes_per_sec": 0, 00:16:15.077 "r_mbytes_per_sec": 0, 00:16:15.077 "w_mbytes_per_sec": 0 00:16:15.077 }, 00:16:15.077 "claimed": true, 00:16:15.077 "claim_type": "exclusive_write", 00:16:15.077 "zoned": false, 00:16:15.077 "supported_io_types": { 00:16:15.077 "read": true, 00:16:15.077 "write": true, 00:16:15.077 "unmap": true, 00:16:15.077 "flush": true, 00:16:15.077 "reset": true, 00:16:15.077 "nvme_admin": false, 00:16:15.077 "nvme_io": false, 00:16:15.077 "nvme_io_md": false, 00:16:15.077 "write_zeroes": true, 00:16:15.077 "zcopy": true, 00:16:15.077 "get_zone_info": false, 00:16:15.077 "zone_management": false, 00:16:15.077 "zone_append": false, 00:16:15.077 "compare": false, 00:16:15.077 "compare_and_write": false, 00:16:15.077 "abort": true, 00:16:15.077 "seek_hole": false, 00:16:15.077 "seek_data": false, 00:16:15.077 "copy": true, 00:16:15.077 "nvme_iov_md": false 00:16:15.077 }, 00:16:15.077 "memory_domains": [ 00:16:15.077 { 00:16:15.077 "dma_device_id": "system", 00:16:15.077 "dma_device_type": 1 00:16:15.077 }, 00:16:15.077 { 00:16:15.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.077 "dma_device_type": 2 00:16:15.077 } 00:16:15.077 ], 00:16:15.077 "driver_specific": {} 00:16:15.077 } 00:16:15.077 ] 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.077 09:15:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.077 "name": "Existed_Raid", 00:16:15.077 "uuid": "802cdf3d-b7d4-4bb5-8e1a-48872d988f56", 00:16:15.077 "strip_size_kb": 64, 00:16:15.077 "state": "online", 00:16:15.077 "raid_level": "raid5f", 00:16:15.077 "superblock": false, 00:16:15.077 "num_base_bdevs": 4, 00:16:15.077 "num_base_bdevs_discovered": 4, 00:16:15.077 "num_base_bdevs_operational": 4, 00:16:15.077 "base_bdevs_list": [ 00:16:15.077 { 00:16:15.077 "name": 
"BaseBdev1", 00:16:15.077 "uuid": "b0fd2d07-04bc-47e0-b895-0fdacf9d1481", 00:16:15.077 "is_configured": true, 00:16:15.077 "data_offset": 0, 00:16:15.077 "data_size": 65536 00:16:15.077 }, 00:16:15.077 { 00:16:15.077 "name": "BaseBdev2", 00:16:15.077 "uuid": "6f896845-eee9-47e9-8e51-ef6a46911bf4", 00:16:15.077 "is_configured": true, 00:16:15.077 "data_offset": 0, 00:16:15.077 "data_size": 65536 00:16:15.077 }, 00:16:15.077 { 00:16:15.077 "name": "BaseBdev3", 00:16:15.077 "uuid": "a84ba480-169b-4bb0-8025-da6e1f104b77", 00:16:15.077 "is_configured": true, 00:16:15.077 "data_offset": 0, 00:16:15.077 "data_size": 65536 00:16:15.077 }, 00:16:15.077 { 00:16:15.077 "name": "BaseBdev4", 00:16:15.077 "uuid": "594e11a7-60d9-46ab-863d-b38792d8e2e4", 00:16:15.077 "is_configured": true, 00:16:15.077 "data_offset": 0, 00:16:15.077 "data_size": 65536 00:16:15.077 } 00:16:15.077 ] 00:16:15.077 }' 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.077 09:15:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.646 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:15.646 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:15.646 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.647 [2024-10-15 09:15:33.250634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:15.647 "name": "Existed_Raid", 00:16:15.647 "aliases": [ 00:16:15.647 "802cdf3d-b7d4-4bb5-8e1a-48872d988f56" 00:16:15.647 ], 00:16:15.647 "product_name": "Raid Volume", 00:16:15.647 "block_size": 512, 00:16:15.647 "num_blocks": 196608, 00:16:15.647 "uuid": "802cdf3d-b7d4-4bb5-8e1a-48872d988f56", 00:16:15.647 "assigned_rate_limits": { 00:16:15.647 "rw_ios_per_sec": 0, 00:16:15.647 "rw_mbytes_per_sec": 0, 00:16:15.647 "r_mbytes_per_sec": 0, 00:16:15.647 "w_mbytes_per_sec": 0 00:16:15.647 }, 00:16:15.647 "claimed": false, 00:16:15.647 "zoned": false, 00:16:15.647 "supported_io_types": { 00:16:15.647 "read": true, 00:16:15.647 "write": true, 00:16:15.647 "unmap": false, 00:16:15.647 "flush": false, 00:16:15.647 "reset": true, 00:16:15.647 "nvme_admin": false, 00:16:15.647 "nvme_io": false, 00:16:15.647 "nvme_io_md": false, 00:16:15.647 "write_zeroes": true, 00:16:15.647 "zcopy": false, 00:16:15.647 "get_zone_info": false, 00:16:15.647 "zone_management": false, 00:16:15.647 "zone_append": false, 00:16:15.647 "compare": false, 00:16:15.647 "compare_and_write": false, 00:16:15.647 "abort": false, 00:16:15.647 "seek_hole": false, 00:16:15.647 "seek_data": false, 00:16:15.647 "copy": false, 00:16:15.647 "nvme_iov_md": false 00:16:15.647 }, 00:16:15.647 "driver_specific": { 00:16:15.647 "raid": { 00:16:15.647 "uuid": "802cdf3d-b7d4-4bb5-8e1a-48872d988f56", 00:16:15.647 "strip_size_kb": 64, 
00:16:15.647 "state": "online", 00:16:15.647 "raid_level": "raid5f", 00:16:15.647 "superblock": false, 00:16:15.647 "num_base_bdevs": 4, 00:16:15.647 "num_base_bdevs_discovered": 4, 00:16:15.647 "num_base_bdevs_operational": 4, 00:16:15.647 "base_bdevs_list": [ 00:16:15.647 { 00:16:15.647 "name": "BaseBdev1", 00:16:15.647 "uuid": "b0fd2d07-04bc-47e0-b895-0fdacf9d1481", 00:16:15.647 "is_configured": true, 00:16:15.647 "data_offset": 0, 00:16:15.647 "data_size": 65536 00:16:15.647 }, 00:16:15.647 { 00:16:15.647 "name": "BaseBdev2", 00:16:15.647 "uuid": "6f896845-eee9-47e9-8e51-ef6a46911bf4", 00:16:15.647 "is_configured": true, 00:16:15.647 "data_offset": 0, 00:16:15.647 "data_size": 65536 00:16:15.647 }, 00:16:15.647 { 00:16:15.647 "name": "BaseBdev3", 00:16:15.647 "uuid": "a84ba480-169b-4bb0-8025-da6e1f104b77", 00:16:15.647 "is_configured": true, 00:16:15.647 "data_offset": 0, 00:16:15.647 "data_size": 65536 00:16:15.647 }, 00:16:15.647 { 00:16:15.647 "name": "BaseBdev4", 00:16:15.647 "uuid": "594e11a7-60d9-46ab-863d-b38792d8e2e4", 00:16:15.647 "is_configured": true, 00:16:15.647 "data_offset": 0, 00:16:15.647 "data_size": 65536 00:16:15.647 } 00:16:15.647 ] 00:16:15.647 } 00:16:15.647 } 00:16:15.647 }' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:15.647 BaseBdev2 00:16:15.647 BaseBdev3 00:16:15.647 BaseBdev4' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.647 09:15:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.647 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:15.907 [2024-10-15 09:15:33.581924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.907 09:15:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.907 09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.907 "name": "Existed_Raid", 00:16:15.908 "uuid": "802cdf3d-b7d4-4bb5-8e1a-48872d988f56", 00:16:15.908 "strip_size_kb": 64, 00:16:15.908 "state": "online", 00:16:15.908 "raid_level": "raid5f", 00:16:15.908 "superblock": false, 00:16:15.908 "num_base_bdevs": 4, 00:16:15.908 "num_base_bdevs_discovered": 3, 00:16:15.908 "num_base_bdevs_operational": 3, 00:16:15.908 "base_bdevs_list": [ 00:16:15.908 { 00:16:15.908 "name": null, 00:16:15.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.908 "is_configured": false, 00:16:15.908 "data_offset": 0, 00:16:15.908 "data_size": 65536 00:16:15.908 }, 00:16:15.908 { 00:16:15.908 "name": "BaseBdev2", 00:16:15.908 "uuid": "6f896845-eee9-47e9-8e51-ef6a46911bf4", 00:16:15.908 "is_configured": true, 00:16:15.908 "data_offset": 0, 00:16:15.908 "data_size": 65536 00:16:15.908 }, 00:16:15.908 { 00:16:15.908 "name": "BaseBdev3", 00:16:15.908 "uuid": "a84ba480-169b-4bb0-8025-da6e1f104b77", 00:16:15.908 "is_configured": true, 00:16:15.908 "data_offset": 0, 00:16:15.908 "data_size": 65536 00:16:15.908 }, 00:16:15.908 { 00:16:15.908 "name": "BaseBdev4", 00:16:15.908 "uuid": "594e11a7-60d9-46ab-863d-b38792d8e2e4", 00:16:15.908 "is_configured": true, 00:16:15.908 "data_offset": 0, 00:16:15.908 "data_size": 65536 00:16:15.908 } 00:16:15.908 ] 00:16:15.908 }' 00:16:15.908 
09:15:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.908 09:15:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:16.478 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:16.478 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.478 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:16.478 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.478 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.479 [2024-10-15 09:15:34.209508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:16.479 [2024-10-15 09:15:34.209654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.479 [2024-10-15 09:15:34.315881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.479 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.479 [2024-10-15 09:15:34.371872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.739 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.739 [2024-10-15 09:15:34.534755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:16.739 [2024-10-15 09:15:34.534817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:16.999 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.999 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:16.999 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:16.999 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 09:15:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 BaseBdev2 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 [ 00:16:17.000 { 00:16:17.000 "name": "BaseBdev2", 00:16:17.000 "aliases": [ 00:16:17.000 "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9" 00:16:17.000 ], 00:16:17.000 "product_name": "Malloc disk", 00:16:17.000 "block_size": 512, 00:16:17.000 "num_blocks": 65536, 00:16:17.000 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:17.000 "assigned_rate_limits": { 00:16:17.000 "rw_ios_per_sec": 0, 00:16:17.000 "rw_mbytes_per_sec": 0, 00:16:17.000 "r_mbytes_per_sec": 0, 00:16:17.000 "w_mbytes_per_sec": 0 00:16:17.000 }, 00:16:17.000 "claimed": false, 00:16:17.000 "zoned": false, 00:16:17.000 "supported_io_types": { 00:16:17.000 "read": true, 00:16:17.000 "write": true, 00:16:17.000 "unmap": true, 00:16:17.000 "flush": true, 00:16:17.000 "reset": true, 00:16:17.000 "nvme_admin": false, 00:16:17.000 "nvme_io": false, 00:16:17.000 "nvme_io_md": false, 00:16:17.000 "write_zeroes": true, 00:16:17.000 "zcopy": true, 00:16:17.000 "get_zone_info": false, 00:16:17.000 "zone_management": false, 00:16:17.000 "zone_append": false, 00:16:17.000 "compare": false, 00:16:17.000 "compare_and_write": false, 00:16:17.000 "abort": true, 00:16:17.000 "seek_hole": false, 00:16:17.000 "seek_data": false, 00:16:17.000 "copy": true, 00:16:17.000 "nvme_iov_md": false 00:16:17.000 }, 00:16:17.000 "memory_domains": [ 00:16:17.000 { 00:16:17.000 "dma_device_id": "system", 00:16:17.000 "dma_device_type": 1 00:16:17.000 }, 
00:16:17.000 { 00:16:17.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.000 "dma_device_type": 2 00:16:17.000 } 00:16:17.000 ], 00:16:17.000 "driver_specific": {} 00:16:17.000 } 00:16:17.000 ] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 BaseBdev3 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.000 [ 00:16:17.000 { 00:16:17.000 "name": "BaseBdev3", 00:16:17.000 "aliases": [ 00:16:17.000 "23104e6e-8294-4c02-89c0-b02ad2e99b42" 00:16:17.000 ], 00:16:17.000 "product_name": "Malloc disk", 00:16:17.000 "block_size": 512, 00:16:17.000 "num_blocks": 65536, 00:16:17.000 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:17.000 "assigned_rate_limits": { 00:16:17.000 "rw_ios_per_sec": 0, 00:16:17.000 "rw_mbytes_per_sec": 0, 00:16:17.000 "r_mbytes_per_sec": 0, 00:16:17.000 "w_mbytes_per_sec": 0 00:16:17.000 }, 00:16:17.000 "claimed": false, 00:16:17.000 "zoned": false, 00:16:17.000 "supported_io_types": { 00:16:17.000 "read": true, 00:16:17.000 "write": true, 00:16:17.000 "unmap": true, 00:16:17.000 "flush": true, 00:16:17.000 "reset": true, 00:16:17.000 "nvme_admin": false, 00:16:17.000 "nvme_io": false, 00:16:17.000 "nvme_io_md": false, 00:16:17.000 "write_zeroes": true, 00:16:17.000 "zcopy": true, 00:16:17.000 "get_zone_info": false, 00:16:17.000 "zone_management": false, 00:16:17.000 "zone_append": false, 00:16:17.000 "compare": false, 00:16:17.000 "compare_and_write": false, 00:16:17.000 "abort": true, 00:16:17.000 "seek_hole": false, 00:16:17.000 "seek_data": false, 00:16:17.000 "copy": true, 00:16:17.000 "nvme_iov_md": false 00:16:17.000 }, 00:16:17.000 "memory_domains": [ 00:16:17.000 { 00:16:17.000 "dma_device_id": "system", 00:16:17.000 
"dma_device_type": 1 00:16:17.000 }, 00:16:17.000 { 00:16:17.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.000 "dma_device_type": 2 00:16:17.000 } 00:16:17.000 ], 00:16:17.000 "driver_specific": {} 00:16:17.000 } 00:16:17.000 ] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.000 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.260 BaseBdev4 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:17.260 09:15:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.260 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.260 [ 00:16:17.260 { 00:16:17.260 "name": "BaseBdev4", 00:16:17.260 "aliases": [ 00:16:17.260 "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8" 00:16:17.260 ], 00:16:17.260 "product_name": "Malloc disk", 00:16:17.260 "block_size": 512, 00:16:17.260 "num_blocks": 65536, 00:16:17.260 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:17.260 "assigned_rate_limits": { 00:16:17.260 "rw_ios_per_sec": 0, 00:16:17.260 "rw_mbytes_per_sec": 0, 00:16:17.260 "r_mbytes_per_sec": 0, 00:16:17.260 "w_mbytes_per_sec": 0 00:16:17.260 }, 00:16:17.260 "claimed": false, 00:16:17.260 "zoned": false, 00:16:17.260 "supported_io_types": { 00:16:17.260 "read": true, 00:16:17.260 "write": true, 00:16:17.260 "unmap": true, 00:16:17.260 "flush": true, 00:16:17.260 "reset": true, 00:16:17.260 "nvme_admin": false, 00:16:17.260 "nvme_io": false, 00:16:17.260 "nvme_io_md": false, 00:16:17.260 "write_zeroes": true, 00:16:17.260 "zcopy": true, 00:16:17.260 "get_zone_info": false, 00:16:17.260 "zone_management": false, 00:16:17.260 "zone_append": false, 00:16:17.260 "compare": false, 00:16:17.260 "compare_and_write": false, 00:16:17.260 "abort": true, 00:16:17.260 "seek_hole": false, 00:16:17.260 "seek_data": false, 00:16:17.260 "copy": true, 00:16:17.260 "nvme_iov_md": false 00:16:17.260 }, 00:16:17.260 "memory_domains": [ 00:16:17.260 { 00:16:17.260 
"dma_device_id": "system", 00:16:17.261 "dma_device_type": 1 00:16:17.261 }, 00:16:17.261 { 00:16:17.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.261 "dma_device_type": 2 00:16:17.261 } 00:16:17.261 ], 00:16:17.261 "driver_specific": {} 00:16:17.261 } 00:16:17.261 ] 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.261 [2024-10-15 09:15:34.939747] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.261 [2024-10-15 09:15:34.939848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.261 [2024-10-15 09:15:34.939912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.261 [2024-10-15 09:15:34.942184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.261 [2024-10-15 09:15:34.942297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.261 "name": "Existed_Raid", 00:16:17.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.261 "strip_size_kb": 64, 00:16:17.261 "state": "configuring", 00:16:17.261 "raid_level": "raid5f", 00:16:17.261 "superblock": false, 00:16:17.261 
"num_base_bdevs": 4, 00:16:17.261 "num_base_bdevs_discovered": 3, 00:16:17.261 "num_base_bdevs_operational": 4, 00:16:17.261 "base_bdevs_list": [ 00:16:17.261 { 00:16:17.261 "name": "BaseBdev1", 00:16:17.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.261 "is_configured": false, 00:16:17.261 "data_offset": 0, 00:16:17.261 "data_size": 0 00:16:17.261 }, 00:16:17.261 { 00:16:17.261 "name": "BaseBdev2", 00:16:17.261 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:17.261 "is_configured": true, 00:16:17.261 "data_offset": 0, 00:16:17.261 "data_size": 65536 00:16:17.261 }, 00:16:17.261 { 00:16:17.261 "name": "BaseBdev3", 00:16:17.261 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:17.261 "is_configured": true, 00:16:17.261 "data_offset": 0, 00:16:17.261 "data_size": 65536 00:16:17.261 }, 00:16:17.261 { 00:16:17.261 "name": "BaseBdev4", 00:16:17.261 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:17.261 "is_configured": true, 00:16:17.261 "data_offset": 0, 00:16:17.261 "data_size": 65536 00:16:17.261 } 00:16:17.261 ] 00:16:17.261 }' 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.261 09:15:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.839 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:17.839 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.839 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.839 [2024-10-15 09:15:35.442941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:17.839 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.840 "name": "Existed_Raid", 00:16:17.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.840 "strip_size_kb": 64, 00:16:17.840 "state": "configuring", 00:16:17.840 "raid_level": "raid5f", 00:16:17.840 "superblock": false, 00:16:17.840 "num_base_bdevs": 4, 
00:16:17.840 "num_base_bdevs_discovered": 2, 00:16:17.840 "num_base_bdevs_operational": 4, 00:16:17.840 "base_bdevs_list": [ 00:16:17.840 { 00:16:17.840 "name": "BaseBdev1", 00:16:17.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.840 "is_configured": false, 00:16:17.840 "data_offset": 0, 00:16:17.840 "data_size": 0 00:16:17.840 }, 00:16:17.840 { 00:16:17.840 "name": null, 00:16:17.840 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:17.840 "is_configured": false, 00:16:17.840 "data_offset": 0, 00:16:17.840 "data_size": 65536 00:16:17.840 }, 00:16:17.840 { 00:16:17.840 "name": "BaseBdev3", 00:16:17.840 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:17.840 "is_configured": true, 00:16:17.840 "data_offset": 0, 00:16:17.840 "data_size": 65536 00:16:17.840 }, 00:16:17.840 { 00:16:17.840 "name": "BaseBdev4", 00:16:17.840 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:17.840 "is_configured": true, 00:16:17.840 "data_offset": 0, 00:16:17.840 "data_size": 65536 00:16:17.840 } 00:16:17.840 ] 00:16:17.840 }' 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.840 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.101 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.101 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:18.101 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.101 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.101 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.101 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:18.101 09:15:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.101 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.101 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.361 [2024-10-15 09:15:35.998335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.361 BaseBdev1 00:16:18.361 09:15:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.361 09:15:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.361 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.361 09:15:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.361 [ 00:16:18.361 { 00:16:18.361 "name": "BaseBdev1", 00:16:18.361 "aliases": [ 00:16:18.361 "d58b5ae0-519a-474e-bb97-4b80bbaa57ce" 00:16:18.361 ], 00:16:18.361 "product_name": "Malloc disk", 00:16:18.361 "block_size": 512, 00:16:18.361 "num_blocks": 65536, 00:16:18.361 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:18.361 "assigned_rate_limits": { 00:16:18.361 "rw_ios_per_sec": 0, 00:16:18.361 "rw_mbytes_per_sec": 0, 00:16:18.361 "r_mbytes_per_sec": 0, 00:16:18.361 "w_mbytes_per_sec": 0 00:16:18.361 }, 00:16:18.361 "claimed": true, 00:16:18.361 "claim_type": "exclusive_write", 00:16:18.361 "zoned": false, 00:16:18.361 "supported_io_types": { 00:16:18.361 "read": true, 00:16:18.361 "write": true, 00:16:18.361 "unmap": true, 00:16:18.361 "flush": true, 00:16:18.361 "reset": true, 00:16:18.362 "nvme_admin": false, 00:16:18.362 "nvme_io": false, 00:16:18.362 "nvme_io_md": false, 00:16:18.362 "write_zeroes": true, 00:16:18.362 "zcopy": true, 00:16:18.362 "get_zone_info": false, 00:16:18.362 "zone_management": false, 00:16:18.362 "zone_append": false, 00:16:18.362 "compare": false, 00:16:18.362 "compare_and_write": false, 00:16:18.362 "abort": true, 00:16:18.362 "seek_hole": false, 00:16:18.362 "seek_data": false, 00:16:18.362 "copy": true, 00:16:18.362 "nvme_iov_md": false 00:16:18.362 }, 00:16:18.362 "memory_domains": [ 00:16:18.362 { 00:16:18.362 "dma_device_id": "system", 00:16:18.362 "dma_device_type": 1 00:16:18.362 }, 00:16:18.362 { 00:16:18.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.362 "dma_device_type": 2 00:16:18.362 } 00:16:18.362 ], 00:16:18.362 "driver_specific": {} 00:16:18.362 } 00:16:18.362 ] 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:18.362 09:15:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.362 "name": "Existed_Raid", 00:16:18.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.362 "strip_size_kb": 64, 00:16:18.362 "state": 
"configuring", 00:16:18.362 "raid_level": "raid5f", 00:16:18.362 "superblock": false, 00:16:18.362 "num_base_bdevs": 4, 00:16:18.362 "num_base_bdevs_discovered": 3, 00:16:18.362 "num_base_bdevs_operational": 4, 00:16:18.362 "base_bdevs_list": [ 00:16:18.362 { 00:16:18.362 "name": "BaseBdev1", 00:16:18.362 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:18.362 "is_configured": true, 00:16:18.362 "data_offset": 0, 00:16:18.362 "data_size": 65536 00:16:18.362 }, 00:16:18.362 { 00:16:18.362 "name": null, 00:16:18.362 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:18.362 "is_configured": false, 00:16:18.362 "data_offset": 0, 00:16:18.362 "data_size": 65536 00:16:18.362 }, 00:16:18.362 { 00:16:18.362 "name": "BaseBdev3", 00:16:18.362 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:18.362 "is_configured": true, 00:16:18.362 "data_offset": 0, 00:16:18.362 "data_size": 65536 00:16:18.362 }, 00:16:18.362 { 00:16:18.362 "name": "BaseBdev4", 00:16:18.362 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:18.362 "is_configured": true, 00:16:18.362 "data_offset": 0, 00:16:18.362 "data_size": 65536 00:16:18.362 } 00:16:18.362 ] 00:16:18.362 }' 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.362 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.621 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.621 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:18.621 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.621 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.622 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.881 09:15:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:18.881 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:18.881 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.881 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.881 [2024-10-15 09:15:36.553650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:18.881 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.882 09:15:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.882 "name": "Existed_Raid", 00:16:18.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.882 "strip_size_kb": 64, 00:16:18.882 "state": "configuring", 00:16:18.882 "raid_level": "raid5f", 00:16:18.882 "superblock": false, 00:16:18.882 "num_base_bdevs": 4, 00:16:18.882 "num_base_bdevs_discovered": 2, 00:16:18.882 "num_base_bdevs_operational": 4, 00:16:18.882 "base_bdevs_list": [ 00:16:18.882 { 00:16:18.882 "name": "BaseBdev1", 00:16:18.882 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:18.882 "is_configured": true, 00:16:18.882 "data_offset": 0, 00:16:18.882 "data_size": 65536 00:16:18.882 }, 00:16:18.882 { 00:16:18.882 "name": null, 00:16:18.882 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:18.882 "is_configured": false, 00:16:18.882 "data_offset": 0, 00:16:18.882 "data_size": 65536 00:16:18.882 }, 00:16:18.882 { 00:16:18.882 "name": null, 00:16:18.882 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:18.882 "is_configured": false, 00:16:18.882 "data_offset": 0, 00:16:18.882 "data_size": 65536 00:16:18.882 }, 00:16:18.882 { 00:16:18.882 "name": "BaseBdev4", 00:16:18.882 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:18.882 "is_configured": true, 00:16:18.882 "data_offset": 0, 00:16:18.882 "data_size": 65536 00:16:18.882 } 00:16:18.882 ] 00:16:18.882 }' 00:16:18.882 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.882 09:15:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.141 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.141 09:15:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:19.141 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.141 09:15:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.141 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.141 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:19.141 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:19.141 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.141 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.401 [2024-10-15 09:15:37.040850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.401 
09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.401 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.401 "name": "Existed_Raid", 00:16:19.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.401 "strip_size_kb": 64, 00:16:19.401 "state": "configuring", 00:16:19.401 "raid_level": "raid5f", 00:16:19.401 "superblock": false, 00:16:19.401 "num_base_bdevs": 4, 00:16:19.402 "num_base_bdevs_discovered": 3, 00:16:19.402 "num_base_bdevs_operational": 4, 00:16:19.402 "base_bdevs_list": [ 00:16:19.402 { 00:16:19.402 "name": "BaseBdev1", 00:16:19.402 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:19.402 "is_configured": true, 00:16:19.402 "data_offset": 0, 00:16:19.402 "data_size": 65536 00:16:19.402 }, 00:16:19.402 { 00:16:19.402 "name": null, 00:16:19.402 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:19.402 "is_configured": 
false, 00:16:19.402 "data_offset": 0, 00:16:19.402 "data_size": 65536 00:16:19.402 }, 00:16:19.402 { 00:16:19.402 "name": "BaseBdev3", 00:16:19.402 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:19.402 "is_configured": true, 00:16:19.402 "data_offset": 0, 00:16:19.402 "data_size": 65536 00:16:19.402 }, 00:16:19.402 { 00:16:19.402 "name": "BaseBdev4", 00:16:19.402 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:19.402 "is_configured": true, 00:16:19.402 "data_offset": 0, 00:16:19.402 "data_size": 65536 00:16:19.402 } 00:16:19.402 ] 00:16:19.402 }' 00:16:19.402 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.402 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 [2024-10-15 09:15:37.595961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.973 "name": "Existed_Raid", 00:16:19.973 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:19.973 "strip_size_kb": 64, 00:16:19.973 "state": "configuring", 00:16:19.973 "raid_level": "raid5f", 00:16:19.973 "superblock": false, 00:16:19.973 "num_base_bdevs": 4, 00:16:19.973 "num_base_bdevs_discovered": 2, 00:16:19.973 "num_base_bdevs_operational": 4, 00:16:19.973 "base_bdevs_list": [ 00:16:19.973 { 00:16:19.973 "name": null, 00:16:19.973 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:19.973 "is_configured": false, 00:16:19.973 "data_offset": 0, 00:16:19.973 "data_size": 65536 00:16:19.973 }, 00:16:19.973 { 00:16:19.973 "name": null, 00:16:19.973 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:19.973 "is_configured": false, 00:16:19.973 "data_offset": 0, 00:16:19.973 "data_size": 65536 00:16:19.973 }, 00:16:19.973 { 00:16:19.973 "name": "BaseBdev3", 00:16:19.973 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:19.973 "is_configured": true, 00:16:19.973 "data_offset": 0, 00:16:19.973 "data_size": 65536 00:16:19.973 }, 00:16:19.973 { 00:16:19.973 "name": "BaseBdev4", 00:16:19.973 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:19.973 "is_configured": true, 00:16:19.973 "data_offset": 0, 00:16:19.973 "data_size": 65536 00:16:19.973 } 00:16:19.973 ] 00:16:19.973 }' 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.973 09:15:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.289 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:20.289 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.289 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.289 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.549 [2024-10-15 09:15:38.198424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.549 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.549 "name": "Existed_Raid", 00:16:20.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.549 "strip_size_kb": 64, 00:16:20.549 "state": "configuring", 00:16:20.549 "raid_level": "raid5f", 00:16:20.549 "superblock": false, 00:16:20.549 "num_base_bdevs": 4, 00:16:20.549 "num_base_bdevs_discovered": 3, 00:16:20.549 "num_base_bdevs_operational": 4, 00:16:20.549 "base_bdevs_list": [ 00:16:20.549 { 00:16:20.549 "name": null, 00:16:20.549 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:20.549 "is_configured": false, 00:16:20.549 "data_offset": 0, 00:16:20.549 "data_size": 65536 00:16:20.549 }, 00:16:20.549 { 00:16:20.549 "name": "BaseBdev2", 00:16:20.549 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:20.549 "is_configured": true, 00:16:20.549 "data_offset": 0, 00:16:20.549 "data_size": 65536 00:16:20.549 }, 00:16:20.549 { 00:16:20.549 "name": "BaseBdev3", 00:16:20.549 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:20.549 "is_configured": true, 00:16:20.549 "data_offset": 0, 00:16:20.549 "data_size": 65536 00:16:20.549 }, 00:16:20.549 { 00:16:20.549 "name": "BaseBdev4", 00:16:20.549 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:20.549 "is_configured": true, 00:16:20.549 "data_offset": 0, 00:16:20.549 "data_size": 65536 00:16:20.549 } 00:16:20.549 ] 00:16:20.549 }' 00:16:20.550 09:15:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.550 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d58b5ae0-519a-474e-bb97-4b80bbaa57ce 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 [2024-10-15 09:15:38.861655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:21.119 [2024-10-15 
09:15:38.861754] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:21.119 [2024-10-15 09:15:38.861766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:21.119 [2024-10-15 09:15:38.862063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:21.119 [2024-10-15 09:15:38.871098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:21.119 [2024-10-15 09:15:38.871127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:21.119 [2024-10-15 09:15:38.871475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.119 NewBaseBdev 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 [ 00:16:21.119 { 00:16:21.119 "name": "NewBaseBdev", 00:16:21.119 "aliases": [ 00:16:21.119 "d58b5ae0-519a-474e-bb97-4b80bbaa57ce" 00:16:21.119 ], 00:16:21.119 "product_name": "Malloc disk", 00:16:21.119 "block_size": 512, 00:16:21.119 "num_blocks": 65536, 00:16:21.119 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:21.119 "assigned_rate_limits": { 00:16:21.119 "rw_ios_per_sec": 0, 00:16:21.119 "rw_mbytes_per_sec": 0, 00:16:21.119 "r_mbytes_per_sec": 0, 00:16:21.119 "w_mbytes_per_sec": 0 00:16:21.119 }, 00:16:21.119 "claimed": true, 00:16:21.119 "claim_type": "exclusive_write", 00:16:21.119 "zoned": false, 00:16:21.119 "supported_io_types": { 00:16:21.119 "read": true, 00:16:21.119 "write": true, 00:16:21.119 "unmap": true, 00:16:21.119 "flush": true, 00:16:21.119 "reset": true, 00:16:21.119 "nvme_admin": false, 00:16:21.119 "nvme_io": false, 00:16:21.119 "nvme_io_md": false, 00:16:21.119 "write_zeroes": true, 00:16:21.119 "zcopy": true, 00:16:21.119 "get_zone_info": false, 00:16:21.119 "zone_management": false, 00:16:21.119 "zone_append": false, 00:16:21.119 "compare": false, 00:16:21.119 "compare_and_write": false, 00:16:21.119 "abort": true, 00:16:21.119 "seek_hole": false, 00:16:21.119 "seek_data": false, 00:16:21.119 "copy": true, 00:16:21.119 "nvme_iov_md": false 00:16:21.119 }, 00:16:21.119 "memory_domains": [ 00:16:21.119 { 00:16:21.119 "dma_device_id": "system", 00:16:21.119 "dma_device_type": 1 00:16:21.119 }, 00:16:21.119 { 00:16:21.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.119 "dma_device_type": 2 00:16:21.119 } 
00:16:21.119 ], 00:16:21.119 "driver_specific": {} 00:16:21.119 } 00:16:21.119 ] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.119 "name": "Existed_Raid", 00:16:21.119 "uuid": "7f626e6f-cdf0-4d59-ad9d-f38fa8ad1f26", 00:16:21.119 "strip_size_kb": 64, 00:16:21.119 "state": "online", 00:16:21.119 "raid_level": "raid5f", 00:16:21.119 "superblock": false, 00:16:21.119 "num_base_bdevs": 4, 00:16:21.119 "num_base_bdevs_discovered": 4, 00:16:21.119 "num_base_bdevs_operational": 4, 00:16:21.119 "base_bdevs_list": [ 00:16:21.119 { 00:16:21.119 "name": "NewBaseBdev", 00:16:21.119 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:21.119 "is_configured": true, 00:16:21.119 "data_offset": 0, 00:16:21.119 "data_size": 65536 00:16:21.119 }, 00:16:21.119 { 00:16:21.119 "name": "BaseBdev2", 00:16:21.119 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:21.119 "is_configured": true, 00:16:21.119 "data_offset": 0, 00:16:21.119 "data_size": 65536 00:16:21.119 }, 00:16:21.119 { 00:16:21.119 "name": "BaseBdev3", 00:16:21.119 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:21.119 "is_configured": true, 00:16:21.119 "data_offset": 0, 00:16:21.119 "data_size": 65536 00:16:21.119 }, 00:16:21.119 { 00:16:21.119 "name": "BaseBdev4", 00:16:21.119 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:21.119 "is_configured": true, 00:16:21.119 "data_offset": 0, 00:16:21.119 "data_size": 65536 00:16:21.119 } 00:16:21.119 ] 00:16:21.119 }' 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.119 09:15:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.687 [2024-10-15 09:15:39.404933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.687 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.687 "name": "Existed_Raid", 00:16:21.687 "aliases": [ 00:16:21.687 "7f626e6f-cdf0-4d59-ad9d-f38fa8ad1f26" 00:16:21.687 ], 00:16:21.687 "product_name": "Raid Volume", 00:16:21.687 "block_size": 512, 00:16:21.687 "num_blocks": 196608, 00:16:21.687 "uuid": "7f626e6f-cdf0-4d59-ad9d-f38fa8ad1f26", 00:16:21.687 "assigned_rate_limits": { 00:16:21.687 "rw_ios_per_sec": 0, 00:16:21.687 "rw_mbytes_per_sec": 0, 00:16:21.687 "r_mbytes_per_sec": 0, 00:16:21.687 "w_mbytes_per_sec": 0 00:16:21.687 }, 00:16:21.687 "claimed": false, 00:16:21.687 "zoned": false, 00:16:21.687 "supported_io_types": { 00:16:21.687 "read": true, 00:16:21.687 "write": true, 00:16:21.687 "unmap": false, 00:16:21.687 "flush": false, 00:16:21.687 "reset": true, 00:16:21.687 "nvme_admin": false, 00:16:21.687 "nvme_io": false, 00:16:21.687 "nvme_io_md": 
false, 00:16:21.687 "write_zeroes": true, 00:16:21.687 "zcopy": false, 00:16:21.687 "get_zone_info": false, 00:16:21.687 "zone_management": false, 00:16:21.687 "zone_append": false, 00:16:21.687 "compare": false, 00:16:21.687 "compare_and_write": false, 00:16:21.687 "abort": false, 00:16:21.687 "seek_hole": false, 00:16:21.687 "seek_data": false, 00:16:21.687 "copy": false, 00:16:21.687 "nvme_iov_md": false 00:16:21.687 }, 00:16:21.687 "driver_specific": { 00:16:21.687 "raid": { 00:16:21.687 "uuid": "7f626e6f-cdf0-4d59-ad9d-f38fa8ad1f26", 00:16:21.687 "strip_size_kb": 64, 00:16:21.687 "state": "online", 00:16:21.687 "raid_level": "raid5f", 00:16:21.687 "superblock": false, 00:16:21.687 "num_base_bdevs": 4, 00:16:21.687 "num_base_bdevs_discovered": 4, 00:16:21.687 "num_base_bdevs_operational": 4, 00:16:21.687 "base_bdevs_list": [ 00:16:21.687 { 00:16:21.687 "name": "NewBaseBdev", 00:16:21.687 "uuid": "d58b5ae0-519a-474e-bb97-4b80bbaa57ce", 00:16:21.687 "is_configured": true, 00:16:21.688 "data_offset": 0, 00:16:21.688 "data_size": 65536 00:16:21.688 }, 00:16:21.688 { 00:16:21.688 "name": "BaseBdev2", 00:16:21.688 "uuid": "0606d3da-7d7c-44a3-914f-67fe6ffa3ff9", 00:16:21.688 "is_configured": true, 00:16:21.688 "data_offset": 0, 00:16:21.688 "data_size": 65536 00:16:21.688 }, 00:16:21.688 { 00:16:21.688 "name": "BaseBdev3", 00:16:21.688 "uuid": "23104e6e-8294-4c02-89c0-b02ad2e99b42", 00:16:21.688 "is_configured": true, 00:16:21.688 "data_offset": 0, 00:16:21.688 "data_size": 65536 00:16:21.688 }, 00:16:21.688 { 00:16:21.688 "name": "BaseBdev4", 00:16:21.688 "uuid": "820e7f39-fd0b-493a-ba7c-3ea8ea2da2d8", 00:16:21.688 "is_configured": true, 00:16:21.688 "data_offset": 0, 00:16:21.688 "data_size": 65536 00:16:21.688 } 00:16:21.688 ] 00:16:21.688 } 00:16:21.688 } 00:16:21.688 }' 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.688 09:15:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:21.688 BaseBdev2 00:16:21.688 BaseBdev3 00:16:21.688 BaseBdev4' 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.688 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.948 09:15:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.948 [2024-10-15 09:15:39.724108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.948 [2024-10-15 09:15:39.724146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.948 [2024-10-15 09:15:39.724252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.948 [2024-10-15 09:15:39.724582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.948 [2024-10-15 09:15:39.724594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83071 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83071 ']' 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83071 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83071 00:16:21.948 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.948 killing process with pid 83071 00:16:21.949 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.949 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83071' 00:16:21.949 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 83071 00:16:21.949 [2024-10-15 09:15:39.763712] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.949 09:15:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 83071 00:16:22.515 [2024-10-15 09:15:40.231378] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:23.924 00:16:23.924 real 0m12.464s 00:16:23.924 user 0m19.709s 00:16:23.924 sys 0m2.266s 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.924 ************************************ 00:16:23.924 END TEST raid5f_state_function_test 00:16:23.924 ************************************ 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.924 09:15:41 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:23.924 09:15:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:23.924 09:15:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.924 09:15:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.924 ************************************ 00:16:23.924 START TEST 
raid5f_state_function_test_sb 00:16:23.924 ************************************ 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:23.924 
09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83748 00:16:23.924 Process raid pid: 83748 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83748' 00:16:23.924 09:15:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83748 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83748 ']' 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.924 09:15:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.924 [2024-10-15 09:15:41.684193] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:16:23.924 [2024-10-15 09:15:41.684422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.183 [2024-10-15 09:15:41.853023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.183 [2024-10-15 09:15:41.980123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.442 [2024-10-15 09:15:42.192945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.442 [2024-10-15 09:15:42.193086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.702 [2024-10-15 09:15:42.552673] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.702 [2024-10-15 09:15:42.552740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.702 [2024-10-15 09:15:42.552752] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.702 [2024-10-15 09:15:42.552761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.702 [2024-10-15 09:15:42.552772] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:24.702 [2024-10-15 09:15:42.552781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.702 [2024-10-15 09:15:42.552787] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:24.702 [2024-10-15 09:15:42.552795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.702 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.961 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.961 "name": "Existed_Raid", 00:16:24.961 "uuid": "3eb76967-7bf0-4996-ab41-ad72654239a6", 00:16:24.961 "strip_size_kb": 64, 00:16:24.961 "state": "configuring", 00:16:24.961 "raid_level": "raid5f", 00:16:24.961 "superblock": true, 00:16:24.961 "num_base_bdevs": 4, 00:16:24.961 "num_base_bdevs_discovered": 0, 00:16:24.961 "num_base_bdevs_operational": 4, 00:16:24.961 "base_bdevs_list": [ 00:16:24.961 { 00:16:24.961 "name": "BaseBdev1", 00:16:24.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.961 "is_configured": false, 00:16:24.961 "data_offset": 0, 00:16:24.961 "data_size": 0 00:16:24.961 }, 00:16:24.961 { 00:16:24.961 "name": "BaseBdev2", 00:16:24.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.961 "is_configured": false, 00:16:24.961 "data_offset": 0, 00:16:24.961 "data_size": 0 00:16:24.961 }, 00:16:24.961 { 00:16:24.961 "name": "BaseBdev3", 00:16:24.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.961 "is_configured": false, 00:16:24.961 "data_offset": 0, 00:16:24.961 "data_size": 0 00:16:24.961 }, 00:16:24.961 { 00:16:24.961 "name": "BaseBdev4", 00:16:24.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.961 "is_configured": false, 00:16:24.961 "data_offset": 0, 00:16:24.961 "data_size": 0 00:16:24.961 } 00:16:24.961 ] 00:16:24.961 }' 00:16:24.961 09:15:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.961 09:15:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
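The xtrace above walks `bdev_raid.sh@209-223`: a loop builds the `base_bdevs` array, then the strip-size and superblock flags are derived before `rpc_cmd bdev_raid_create` is issued. A minimal bash sketch of that setup (variable names mirror the trace; the final `echo` only previews the command shape, it does not contact an SPDK target):

```shell
#!/usr/bin/env bash
# Build the base bdev list, as in the (( i <= num_base_bdevs )) loop traced above.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs+=("BaseBdev$i")
done

# Strip size applies to every level except raid1, per the '[' raid5f '!=' raid1 ']' check.
raid_level=raid5f
strip_size_create_arg=""
if [ "$raid_level" != raid1 ]; then
  strip_size=64
  strip_size_create_arg="-z $strip_size"
fi

# superblock=true maps to the -s flag, per the '[' true = true ']' check.
superblock=true
superblock_create_arg=""
if [ "$superblock" = true ]; then
  superblock_create_arg=-s
fi

# Preview the resulting RPC invocation (matches the rpc_cmd line in the trace).
echo "bdev_raid_create $strip_size_create_arg $superblock_create_arg -r $raid_level -b '${base_bdevs[*]}' -n Existed_Raid"
```

Running this prints `bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid`, the same argument shape the test passes to `rpc_cmd` above.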
00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.220 [2024-10-15 09:15:43.007822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.220 [2024-10-15 09:15:43.007909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.220 [2024-10-15 09:15:43.015832] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.220 [2024-10-15 09:15:43.015909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.220 [2024-10-15 09:15:43.015937] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.220 [2024-10-15 09:15:43.015959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.220 [2024-10-15 09:15:43.015977] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.220 [2024-10-15 09:15:43.015997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.220 [2024-10-15 09:15:43.016014] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:25.220 [2024-10-15 09:15:43.016034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.220 [2024-10-15 09:15:43.058615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.220 BaseBdev1 00:16:25.220 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.221 [ 00:16:25.221 { 00:16:25.221 "name": "BaseBdev1", 00:16:25.221 "aliases": [ 00:16:25.221 "d312b44e-927d-4929-9c8c-3da95ae591c0" 00:16:25.221 ], 00:16:25.221 "product_name": "Malloc disk", 00:16:25.221 "block_size": 512, 00:16:25.221 "num_blocks": 65536, 00:16:25.221 "uuid": "d312b44e-927d-4929-9c8c-3da95ae591c0", 00:16:25.221 "assigned_rate_limits": { 00:16:25.221 "rw_ios_per_sec": 0, 00:16:25.221 "rw_mbytes_per_sec": 0, 00:16:25.221 "r_mbytes_per_sec": 0, 00:16:25.221 "w_mbytes_per_sec": 0 00:16:25.221 }, 00:16:25.221 "claimed": true, 00:16:25.221 "claim_type": "exclusive_write", 00:16:25.221 "zoned": false, 00:16:25.221 "supported_io_types": { 00:16:25.221 "read": true, 00:16:25.221 "write": true, 00:16:25.221 "unmap": true, 00:16:25.221 "flush": true, 00:16:25.221 "reset": true, 00:16:25.221 "nvme_admin": false, 00:16:25.221 "nvme_io": false, 00:16:25.221 "nvme_io_md": false, 00:16:25.221 "write_zeroes": true, 00:16:25.221 "zcopy": true, 00:16:25.221 "get_zone_info": false, 00:16:25.221 "zone_management": false, 00:16:25.221 "zone_append": false, 00:16:25.221 "compare": false, 00:16:25.221 "compare_and_write": false, 00:16:25.221 "abort": true, 00:16:25.221 "seek_hole": false, 00:16:25.221 "seek_data": false, 00:16:25.221 "copy": true, 00:16:25.221 "nvme_iov_md": false 00:16:25.221 }, 00:16:25.221 "memory_domains": [ 00:16:25.221 { 00:16:25.221 "dma_device_id": "system", 00:16:25.221 "dma_device_type": 1 00:16:25.221 }, 00:16:25.221 { 00:16:25.221 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:25.221 "dma_device_type": 2 00:16:25.221 } 00:16:25.221 ], 00:16:25.221 "driver_specific": {} 00:16:25.221 } 00:16:25.221 ] 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.221 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.221 09:15:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.481 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.481 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.481 "name": "Existed_Raid", 00:16:25.481 "uuid": "cbbee62c-4d60-4816-80d0-26cbf8010885", 00:16:25.481 "strip_size_kb": 64, 00:16:25.481 "state": "configuring", 00:16:25.481 "raid_level": "raid5f", 00:16:25.481 "superblock": true, 00:16:25.481 "num_base_bdevs": 4, 00:16:25.481 "num_base_bdevs_discovered": 1, 00:16:25.481 "num_base_bdevs_operational": 4, 00:16:25.481 "base_bdevs_list": [ 00:16:25.481 { 00:16:25.481 "name": "BaseBdev1", 00:16:25.481 "uuid": "d312b44e-927d-4929-9c8c-3da95ae591c0", 00:16:25.481 "is_configured": true, 00:16:25.481 "data_offset": 2048, 00:16:25.481 "data_size": 63488 00:16:25.481 }, 00:16:25.481 { 00:16:25.481 "name": "BaseBdev2", 00:16:25.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.481 "is_configured": false, 00:16:25.481 "data_offset": 0, 00:16:25.481 "data_size": 0 00:16:25.481 }, 00:16:25.481 { 00:16:25.481 "name": "BaseBdev3", 00:16:25.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.481 "is_configured": false, 00:16:25.481 "data_offset": 0, 00:16:25.481 "data_size": 0 00:16:25.481 }, 00:16:25.481 { 00:16:25.481 "name": "BaseBdev4", 00:16:25.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.481 "is_configured": false, 00:16:25.481 "data_offset": 0, 00:16:25.481 "data_size": 0 00:16:25.481 } 00:16:25.481 ] 00:16:25.481 }' 00:16:25.481 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.481 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.741 09:15:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.741 [2024-10-15 09:15:43.577789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.741 [2024-10-15 09:15:43.577849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.741 [2024-10-15 09:15:43.589842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.741 [2024-10-15 09:15:43.591804] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.741 [2024-10-15 09:15:43.591844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.741 [2024-10-15 09:15:43.591854] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.741 [2024-10-15 09:15:43.591865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.741 [2024-10-15 09:15:43.591871] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:25.741 [2024-10-15 09:15:43.591880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.741 09:15:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.741 "name": "Existed_Raid", 00:16:25.741 "uuid": "e035e70b-2fc8-4980-8984-6bfe7c417026", 00:16:25.741 "strip_size_kb": 64, 00:16:25.741 "state": "configuring", 00:16:25.741 "raid_level": "raid5f", 00:16:25.741 "superblock": true, 00:16:25.741 "num_base_bdevs": 4, 00:16:25.741 "num_base_bdevs_discovered": 1, 00:16:25.741 "num_base_bdevs_operational": 4, 00:16:25.741 "base_bdevs_list": [ 00:16:25.741 { 00:16:25.741 "name": "BaseBdev1", 00:16:25.741 "uuid": "d312b44e-927d-4929-9c8c-3da95ae591c0", 00:16:25.741 "is_configured": true, 00:16:25.741 "data_offset": 2048, 00:16:25.741 "data_size": 63488 00:16:25.741 }, 00:16:25.741 { 00:16:25.741 "name": "BaseBdev2", 00:16:25.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.741 "is_configured": false, 00:16:25.741 "data_offset": 0, 00:16:25.741 "data_size": 0 00:16:25.741 }, 00:16:25.741 { 00:16:25.741 "name": "BaseBdev3", 00:16:25.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.741 "is_configured": false, 00:16:25.741 "data_offset": 0, 00:16:25.741 "data_size": 0 00:16:25.741 }, 00:16:25.741 { 00:16:25.741 "name": "BaseBdev4", 00:16:25.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.741 "is_configured": false, 00:16:25.741 "data_offset": 0, 00:16:25.741 "data_size": 0 00:16:25.741 } 00:16:25.741 ] 00:16:25.741 }' 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.741 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.394 09:15:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.394 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:26.394 09:15:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.394 [2024-10-15 09:15:44.045362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.394 BaseBdev2 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:26.394 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.395 [ 00:16:26.395 { 00:16:26.395 "name": "BaseBdev2", 00:16:26.395 "aliases": [ 00:16:26.395 
"bdfce4f1-ed15-4c70-9e7a-73a9edc941fc" 00:16:26.395 ], 00:16:26.395 "product_name": "Malloc disk", 00:16:26.395 "block_size": 512, 00:16:26.395 "num_blocks": 65536, 00:16:26.395 "uuid": "bdfce4f1-ed15-4c70-9e7a-73a9edc941fc", 00:16:26.395 "assigned_rate_limits": { 00:16:26.395 "rw_ios_per_sec": 0, 00:16:26.395 "rw_mbytes_per_sec": 0, 00:16:26.395 "r_mbytes_per_sec": 0, 00:16:26.395 "w_mbytes_per_sec": 0 00:16:26.395 }, 00:16:26.395 "claimed": true, 00:16:26.395 "claim_type": "exclusive_write", 00:16:26.395 "zoned": false, 00:16:26.395 "supported_io_types": { 00:16:26.395 "read": true, 00:16:26.395 "write": true, 00:16:26.395 "unmap": true, 00:16:26.395 "flush": true, 00:16:26.395 "reset": true, 00:16:26.395 "nvme_admin": false, 00:16:26.395 "nvme_io": false, 00:16:26.395 "nvme_io_md": false, 00:16:26.395 "write_zeroes": true, 00:16:26.395 "zcopy": true, 00:16:26.395 "get_zone_info": false, 00:16:26.395 "zone_management": false, 00:16:26.395 "zone_append": false, 00:16:26.395 "compare": false, 00:16:26.395 "compare_and_write": false, 00:16:26.395 "abort": true, 00:16:26.395 "seek_hole": false, 00:16:26.395 "seek_data": false, 00:16:26.395 "copy": true, 00:16:26.395 "nvme_iov_md": false 00:16:26.395 }, 00:16:26.395 "memory_domains": [ 00:16:26.395 { 00:16:26.395 "dma_device_id": "system", 00:16:26.395 "dma_device_type": 1 00:16:26.395 }, 00:16:26.395 { 00:16:26.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.395 "dma_device_type": 2 00:16:26.395 } 00:16:26.395 ], 00:16:26.395 "driver_specific": {} 00:16:26.395 } 00:16:26.395 ] 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.395 "name": "Existed_Raid", 00:16:26.395 "uuid": 
"e035e70b-2fc8-4980-8984-6bfe7c417026", 00:16:26.395 "strip_size_kb": 64, 00:16:26.395 "state": "configuring", 00:16:26.395 "raid_level": "raid5f", 00:16:26.395 "superblock": true, 00:16:26.395 "num_base_bdevs": 4, 00:16:26.395 "num_base_bdevs_discovered": 2, 00:16:26.395 "num_base_bdevs_operational": 4, 00:16:26.395 "base_bdevs_list": [ 00:16:26.395 { 00:16:26.395 "name": "BaseBdev1", 00:16:26.395 "uuid": "d312b44e-927d-4929-9c8c-3da95ae591c0", 00:16:26.395 "is_configured": true, 00:16:26.395 "data_offset": 2048, 00:16:26.395 "data_size": 63488 00:16:26.395 }, 00:16:26.395 { 00:16:26.395 "name": "BaseBdev2", 00:16:26.395 "uuid": "bdfce4f1-ed15-4c70-9e7a-73a9edc941fc", 00:16:26.395 "is_configured": true, 00:16:26.395 "data_offset": 2048, 00:16:26.395 "data_size": 63488 00:16:26.395 }, 00:16:26.395 { 00:16:26.395 "name": "BaseBdev3", 00:16:26.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.395 "is_configured": false, 00:16:26.395 "data_offset": 0, 00:16:26.395 "data_size": 0 00:16:26.395 }, 00:16:26.395 { 00:16:26.395 "name": "BaseBdev4", 00:16:26.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.395 "is_configured": false, 00:16:26.395 "data_offset": 0, 00:16:26.395 "data_size": 0 00:16:26.395 } 00:16:26.395 ] 00:16:26.395 }' 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.395 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.654 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:26.654 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.654 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.914 [2024-10-15 09:15:44.570712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:26.914 BaseBdev3 
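After each `bdev_malloc_create`, the trace calls `waitforbdev`, which polls `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>` until the bdev appears (the default `bdev_timeout=2000` ms is visible above). A hedged sketch of that polling pattern follows; `rpc_cmd` is stubbed here so the snippet runs standalone, whereas the real helper in `autotest_common.sh` talks to the SPDK JSON-RPC socket and also issues `bdev_wait_for_examine`:

```shell
#!/usr/bin/env bash
# Stub standing in for the real SPDK JSON-RPC client; always "finds" the bdev.
rpc_cmd() {
  [ "$1" = bdev_get_bdevs ] && return 0
}

# Poll for a bdev by name, giving up after bdev_timeout milliseconds
# (approximation of the waitforbdev helper seen in the trace).
waitforbdev() {
  local bdev_name=$1
  local bdev_timeout=${2:-2000}  # milliseconds, as in the trace
  local waited
  for ((waited = 0; waited < bdev_timeout; waited += 100)); do
    if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" &>/dev/null; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforbdev BaseBdev3 && echo "BaseBdev3 ready"
```

The retry loop is why the trace shows `local max_retries=100` alongside the timeout: the test tolerates the window between bdev creation and its registration becoming visible over RPC.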
00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.914 [ 00:16:26.914 { 00:16:26.914 "name": "BaseBdev3", 00:16:26.914 "aliases": [ 00:16:26.914 "effd0932-d1a5-4b99-a7e0-fc0d3dab3a93" 00:16:26.914 ], 00:16:26.914 "product_name": "Malloc disk", 00:16:26.914 "block_size": 512, 00:16:26.914 "num_blocks": 65536, 00:16:26.914 "uuid": "effd0932-d1a5-4b99-a7e0-fc0d3dab3a93", 00:16:26.914 
"assigned_rate_limits": { 00:16:26.914 "rw_ios_per_sec": 0, 00:16:26.914 "rw_mbytes_per_sec": 0, 00:16:26.914 "r_mbytes_per_sec": 0, 00:16:26.914 "w_mbytes_per_sec": 0 00:16:26.914 }, 00:16:26.914 "claimed": true, 00:16:26.914 "claim_type": "exclusive_write", 00:16:26.914 "zoned": false, 00:16:26.914 "supported_io_types": { 00:16:26.914 "read": true, 00:16:26.914 "write": true, 00:16:26.914 "unmap": true, 00:16:26.914 "flush": true, 00:16:26.914 "reset": true, 00:16:26.914 "nvme_admin": false, 00:16:26.914 "nvme_io": false, 00:16:26.914 "nvme_io_md": false, 00:16:26.914 "write_zeroes": true, 00:16:26.914 "zcopy": true, 00:16:26.914 "get_zone_info": false, 00:16:26.914 "zone_management": false, 00:16:26.914 "zone_append": false, 00:16:26.914 "compare": false, 00:16:26.914 "compare_and_write": false, 00:16:26.914 "abort": true, 00:16:26.914 "seek_hole": false, 00:16:26.914 "seek_data": false, 00:16:26.914 "copy": true, 00:16:26.914 "nvme_iov_md": false 00:16:26.914 }, 00:16:26.914 "memory_domains": [ 00:16:26.914 { 00:16:26.914 "dma_device_id": "system", 00:16:26.914 "dma_device_type": 1 00:16:26.914 }, 00:16:26.914 { 00:16:26.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.914 "dma_device_type": 2 00:16:26.914 } 00:16:26.914 ], 00:16:26.914 "driver_specific": {} 00:16:26.914 } 00:16:26.914 ] 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.914 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.915 "name": "Existed_Raid", 00:16:26.915 "uuid": "e035e70b-2fc8-4980-8984-6bfe7c417026", 00:16:26.915 "strip_size_kb": 64, 00:16:26.915 "state": "configuring", 00:16:26.915 "raid_level": "raid5f", 00:16:26.915 "superblock": true, 00:16:26.915 "num_base_bdevs": 4, 00:16:26.915 "num_base_bdevs_discovered": 3, 
00:16:26.915 "num_base_bdevs_operational": 4, 00:16:26.915 "base_bdevs_list": [ 00:16:26.915 { 00:16:26.915 "name": "BaseBdev1", 00:16:26.915 "uuid": "d312b44e-927d-4929-9c8c-3da95ae591c0", 00:16:26.915 "is_configured": true, 00:16:26.915 "data_offset": 2048, 00:16:26.915 "data_size": 63488 00:16:26.915 }, 00:16:26.915 { 00:16:26.915 "name": "BaseBdev2", 00:16:26.915 "uuid": "bdfce4f1-ed15-4c70-9e7a-73a9edc941fc", 00:16:26.915 "is_configured": true, 00:16:26.915 "data_offset": 2048, 00:16:26.915 "data_size": 63488 00:16:26.915 }, 00:16:26.915 { 00:16:26.915 "name": "BaseBdev3", 00:16:26.915 "uuid": "effd0932-d1a5-4b99-a7e0-fc0d3dab3a93", 00:16:26.915 "is_configured": true, 00:16:26.915 "data_offset": 2048, 00:16:26.915 "data_size": 63488 00:16:26.915 }, 00:16:26.915 { 00:16:26.915 "name": "BaseBdev4", 00:16:26.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.915 "is_configured": false, 00:16:26.915 "data_offset": 0, 00:16:26.915 "data_size": 0 00:16:26.915 } 00:16:26.915 ] 00:16:26.915 }' 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.915 09:15:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.175 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:27.175 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.175 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.435 [2024-10-15 09:15:45.083118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:27.435 [2024-10-15 09:15:45.083409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:27.435 [2024-10-15 09:15:45.083424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:27.435 [2024-10-15 
09:15:45.083690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:27.435 BaseBdev4 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.435 [2024-10-15 09:15:45.091592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:27.435 [2024-10-15 09:15:45.091673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:27.435 [2024-10-15 09:15:45.092068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:27.435 09:15:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.435 [ 00:16:27.435 { 00:16:27.435 "name": "BaseBdev4", 00:16:27.435 "aliases": [ 00:16:27.435 "ea433f9d-dbae-4006-bda0-d6fa0e69f94a" 00:16:27.435 ], 00:16:27.435 "product_name": "Malloc disk", 00:16:27.435 "block_size": 512, 00:16:27.435 "num_blocks": 65536, 00:16:27.435 "uuid": "ea433f9d-dbae-4006-bda0-d6fa0e69f94a", 00:16:27.435 "assigned_rate_limits": { 00:16:27.435 "rw_ios_per_sec": 0, 00:16:27.435 "rw_mbytes_per_sec": 0, 00:16:27.435 "r_mbytes_per_sec": 0, 00:16:27.435 "w_mbytes_per_sec": 0 00:16:27.435 }, 00:16:27.435 "claimed": true, 00:16:27.435 "claim_type": "exclusive_write", 00:16:27.435 "zoned": false, 00:16:27.435 "supported_io_types": { 00:16:27.435 "read": true, 00:16:27.435 "write": true, 00:16:27.435 "unmap": true, 00:16:27.435 "flush": true, 00:16:27.435 "reset": true, 00:16:27.435 "nvme_admin": false, 00:16:27.435 "nvme_io": false, 00:16:27.435 "nvme_io_md": false, 00:16:27.435 "write_zeroes": true, 00:16:27.435 "zcopy": true, 00:16:27.435 "get_zone_info": false, 00:16:27.435 "zone_management": false, 00:16:27.435 "zone_append": false, 00:16:27.435 "compare": false, 00:16:27.435 "compare_and_write": false, 00:16:27.435 "abort": true, 00:16:27.435 "seek_hole": false, 00:16:27.435 "seek_data": false, 00:16:27.435 "copy": true, 00:16:27.435 "nvme_iov_md": false 00:16:27.435 }, 00:16:27.435 "memory_domains": [ 00:16:27.435 { 00:16:27.435 "dma_device_id": "system", 00:16:27.435 "dma_device_type": 1 00:16:27.435 }, 00:16:27.435 { 00:16:27.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.435 "dma_device_type": 2 00:16:27.435 } 00:16:27.435 ], 00:16:27.435 "driver_specific": {} 00:16:27.435 } 00:16:27.435 ] 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.435 09:15:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.435 "name": "Existed_Raid", 00:16:27.435 "uuid": "e035e70b-2fc8-4980-8984-6bfe7c417026", 00:16:27.435 "strip_size_kb": 64, 00:16:27.435 "state": "online", 00:16:27.435 "raid_level": "raid5f", 00:16:27.435 "superblock": true, 00:16:27.435 "num_base_bdevs": 4, 00:16:27.435 "num_base_bdevs_discovered": 4, 00:16:27.435 "num_base_bdevs_operational": 4, 00:16:27.435 "base_bdevs_list": [ 00:16:27.435 { 00:16:27.435 "name": "BaseBdev1", 00:16:27.435 "uuid": "d312b44e-927d-4929-9c8c-3da95ae591c0", 00:16:27.435 "is_configured": true, 00:16:27.435 "data_offset": 2048, 00:16:27.435 "data_size": 63488 00:16:27.435 }, 00:16:27.435 { 00:16:27.435 "name": "BaseBdev2", 00:16:27.435 "uuid": "bdfce4f1-ed15-4c70-9e7a-73a9edc941fc", 00:16:27.435 "is_configured": true, 00:16:27.435 "data_offset": 2048, 00:16:27.435 "data_size": 63488 00:16:27.435 }, 00:16:27.435 { 00:16:27.435 "name": "BaseBdev3", 00:16:27.435 "uuid": "effd0932-d1a5-4b99-a7e0-fc0d3dab3a93", 00:16:27.435 "is_configured": true, 00:16:27.435 "data_offset": 2048, 00:16:27.435 "data_size": 63488 00:16:27.435 }, 00:16:27.435 { 00:16:27.435 "name": "BaseBdev4", 00:16:27.435 "uuid": "ea433f9d-dbae-4006-bda0-d6fa0e69f94a", 00:16:27.435 "is_configured": true, 00:16:27.435 "data_offset": 2048, 00:16:27.435 "data_size": 63488 00:16:27.435 } 00:16:27.435 ] 00:16:27.435 }' 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.435 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.695 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.695 [2024-10-15 09:15:45.584135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.955 "name": "Existed_Raid", 00:16:27.955 "aliases": [ 00:16:27.955 "e035e70b-2fc8-4980-8984-6bfe7c417026" 00:16:27.955 ], 00:16:27.955 "product_name": "Raid Volume", 00:16:27.955 "block_size": 512, 00:16:27.955 "num_blocks": 190464, 00:16:27.955 "uuid": "e035e70b-2fc8-4980-8984-6bfe7c417026", 00:16:27.955 "assigned_rate_limits": { 00:16:27.955 "rw_ios_per_sec": 0, 00:16:27.955 "rw_mbytes_per_sec": 0, 00:16:27.955 "r_mbytes_per_sec": 0, 00:16:27.955 "w_mbytes_per_sec": 0 00:16:27.955 }, 00:16:27.955 "claimed": false, 00:16:27.955 "zoned": false, 00:16:27.955 "supported_io_types": { 00:16:27.955 "read": true, 00:16:27.955 "write": true, 00:16:27.955 "unmap": false, 00:16:27.955 "flush": false, 
00:16:27.955 "reset": true, 00:16:27.955 "nvme_admin": false, 00:16:27.955 "nvme_io": false, 00:16:27.955 "nvme_io_md": false, 00:16:27.955 "write_zeroes": true, 00:16:27.955 "zcopy": false, 00:16:27.955 "get_zone_info": false, 00:16:27.955 "zone_management": false, 00:16:27.955 "zone_append": false, 00:16:27.955 "compare": false, 00:16:27.955 "compare_and_write": false, 00:16:27.955 "abort": false, 00:16:27.955 "seek_hole": false, 00:16:27.955 "seek_data": false, 00:16:27.955 "copy": false, 00:16:27.955 "nvme_iov_md": false 00:16:27.955 }, 00:16:27.955 "driver_specific": { 00:16:27.955 "raid": { 00:16:27.955 "uuid": "e035e70b-2fc8-4980-8984-6bfe7c417026", 00:16:27.955 "strip_size_kb": 64, 00:16:27.955 "state": "online", 00:16:27.955 "raid_level": "raid5f", 00:16:27.955 "superblock": true, 00:16:27.955 "num_base_bdevs": 4, 00:16:27.955 "num_base_bdevs_discovered": 4, 00:16:27.955 "num_base_bdevs_operational": 4, 00:16:27.955 "base_bdevs_list": [ 00:16:27.955 { 00:16:27.955 "name": "BaseBdev1", 00:16:27.955 "uuid": "d312b44e-927d-4929-9c8c-3da95ae591c0", 00:16:27.955 "is_configured": true, 00:16:27.955 "data_offset": 2048, 00:16:27.955 "data_size": 63488 00:16:27.955 }, 00:16:27.955 { 00:16:27.955 "name": "BaseBdev2", 00:16:27.955 "uuid": "bdfce4f1-ed15-4c70-9e7a-73a9edc941fc", 00:16:27.955 "is_configured": true, 00:16:27.955 "data_offset": 2048, 00:16:27.955 "data_size": 63488 00:16:27.955 }, 00:16:27.955 { 00:16:27.955 "name": "BaseBdev3", 00:16:27.955 "uuid": "effd0932-d1a5-4b99-a7e0-fc0d3dab3a93", 00:16:27.955 "is_configured": true, 00:16:27.955 "data_offset": 2048, 00:16:27.955 "data_size": 63488 00:16:27.955 }, 00:16:27.955 { 00:16:27.955 "name": "BaseBdev4", 00:16:27.955 "uuid": "ea433f9d-dbae-4006-bda0-d6fa0e69f94a", 00:16:27.955 "is_configured": true, 00:16:27.955 "data_offset": 2048, 00:16:27.955 "data_size": 63488 00:16:27.955 } 00:16:27.955 ] 00:16:27.955 } 00:16:27.955 } 00:16:27.955 }' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:27.955 BaseBdev2 00:16:27.955 BaseBdev3 00:16:27.955 BaseBdev4' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.955 09:15:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:27.955 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.215 [2024-10-15 09:15:45.899296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.215 09:15:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.215 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.215 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.215 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.215 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.215 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.215 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.215 "name": "Existed_Raid", 00:16:28.215 "uuid": "e035e70b-2fc8-4980-8984-6bfe7c417026", 00:16:28.215 "strip_size_kb": 64, 00:16:28.215 "state": "online", 00:16:28.215 "raid_level": "raid5f", 00:16:28.215 "superblock": true, 00:16:28.215 "num_base_bdevs": 4, 00:16:28.215 "num_base_bdevs_discovered": 3, 00:16:28.215 "num_base_bdevs_operational": 3, 00:16:28.215 "base_bdevs_list": [ 00:16:28.215 { 00:16:28.215 "name": null, 00:16:28.215 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.215 "is_configured": false, 00:16:28.215 "data_offset": 0, 00:16:28.215 "data_size": 63488 00:16:28.215 }, 00:16:28.215 { 00:16:28.215 "name": "BaseBdev2", 00:16:28.215 "uuid": "bdfce4f1-ed15-4c70-9e7a-73a9edc941fc", 00:16:28.215 "is_configured": true, 00:16:28.215 "data_offset": 2048, 00:16:28.215 "data_size": 63488 00:16:28.215 }, 00:16:28.215 { 00:16:28.215 "name": "BaseBdev3", 00:16:28.215 "uuid": "effd0932-d1a5-4b99-a7e0-fc0d3dab3a93", 00:16:28.215 "is_configured": true, 00:16:28.215 "data_offset": 2048, 00:16:28.215 "data_size": 63488 00:16:28.215 }, 00:16:28.215 { 00:16:28.215 "name": "BaseBdev4", 00:16:28.215 "uuid": "ea433f9d-dbae-4006-bda0-d6fa0e69f94a", 00:16:28.215 "is_configured": true, 00:16:28.215 "data_offset": 2048, 00:16:28.215 "data_size": 63488 00:16:28.215 } 00:16:28.215 ] 00:16:28.215 }' 00:16:28.215 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.215 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 [2024-10-15 09:15:46.491821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.784 [2024-10-15 09:15:46.492063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.784 [2024-10-15 09:15:46.588808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.784 
09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.784 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 [2024-10-15 09:15:46.636769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.044 [2024-10-15 09:15:46.786463] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:29.044 [2024-10-15 09:15:46.786590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:29.044 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.304 BaseBdev2 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.304 09:15:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.304 [ 00:16:29.304 { 00:16:29.304 "name": "BaseBdev2", 00:16:29.304 "aliases": [ 00:16:29.304 "62919762-131c-4b02-ae4d-055bb6de9e36" 00:16:29.304 ], 00:16:29.304 "product_name": "Malloc disk", 00:16:29.304 "block_size": 512, 00:16:29.304 "num_blocks": 65536, 00:16:29.304 "uuid": 
"62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:29.304 "assigned_rate_limits": { 00:16:29.304 "rw_ios_per_sec": 0, 00:16:29.304 "rw_mbytes_per_sec": 0, 00:16:29.304 "r_mbytes_per_sec": 0, 00:16:29.304 "w_mbytes_per_sec": 0 00:16:29.304 }, 00:16:29.304 "claimed": false, 00:16:29.304 "zoned": false, 00:16:29.304 "supported_io_types": { 00:16:29.304 "read": true, 00:16:29.304 "write": true, 00:16:29.304 "unmap": true, 00:16:29.304 "flush": true, 00:16:29.304 "reset": true, 00:16:29.304 "nvme_admin": false, 00:16:29.304 "nvme_io": false, 00:16:29.304 "nvme_io_md": false, 00:16:29.304 "write_zeroes": true, 00:16:29.304 "zcopy": true, 00:16:29.304 "get_zone_info": false, 00:16:29.304 "zone_management": false, 00:16:29.304 "zone_append": false, 00:16:29.304 "compare": false, 00:16:29.304 "compare_and_write": false, 00:16:29.304 "abort": true, 00:16:29.304 "seek_hole": false, 00:16:29.304 "seek_data": false, 00:16:29.304 "copy": true, 00:16:29.304 "nvme_iov_md": false 00:16:29.304 }, 00:16:29.304 "memory_domains": [ 00:16:29.304 { 00:16:29.304 "dma_device_id": "system", 00:16:29.304 "dma_device_type": 1 00:16:29.304 }, 00:16:29.304 { 00:16:29.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.304 "dma_device_type": 2 00:16:29.304 } 00:16:29.304 ], 00:16:29.304 "driver_specific": {} 00:16:29.304 } 00:16:29.304 ] 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.304 BaseBdev3 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.304 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.304 [ 00:16:29.304 { 00:16:29.304 "name": "BaseBdev3", 00:16:29.304 "aliases": [ 00:16:29.304 "507d80a4-fd95-4a81-b698-fa9dc15f713e" 00:16:29.304 ], 00:16:29.304 
"product_name": "Malloc disk", 00:16:29.304 "block_size": 512, 00:16:29.304 "num_blocks": 65536, 00:16:29.304 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:29.304 "assigned_rate_limits": { 00:16:29.305 "rw_ios_per_sec": 0, 00:16:29.305 "rw_mbytes_per_sec": 0, 00:16:29.305 "r_mbytes_per_sec": 0, 00:16:29.305 "w_mbytes_per_sec": 0 00:16:29.305 }, 00:16:29.305 "claimed": false, 00:16:29.305 "zoned": false, 00:16:29.305 "supported_io_types": { 00:16:29.305 "read": true, 00:16:29.305 "write": true, 00:16:29.305 "unmap": true, 00:16:29.305 "flush": true, 00:16:29.305 "reset": true, 00:16:29.305 "nvme_admin": false, 00:16:29.305 "nvme_io": false, 00:16:29.305 "nvme_io_md": false, 00:16:29.305 "write_zeroes": true, 00:16:29.305 "zcopy": true, 00:16:29.305 "get_zone_info": false, 00:16:29.305 "zone_management": false, 00:16:29.305 "zone_append": false, 00:16:29.305 "compare": false, 00:16:29.305 "compare_and_write": false, 00:16:29.305 "abort": true, 00:16:29.305 "seek_hole": false, 00:16:29.305 "seek_data": false, 00:16:29.305 "copy": true, 00:16:29.305 "nvme_iov_md": false 00:16:29.305 }, 00:16:29.305 "memory_domains": [ 00:16:29.305 { 00:16:29.305 "dma_device_id": "system", 00:16:29.305 "dma_device_type": 1 00:16:29.305 }, 00:16:29.305 { 00:16:29.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.305 "dma_device_type": 2 00:16:29.305 } 00:16:29.305 ], 00:16:29.305 "driver_specific": {} 00:16:29.305 } 00:16:29.305 ] 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.305 BaseBdev4 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.305 [ 00:16:29.305 { 00:16:29.305 "name": "BaseBdev4", 00:16:29.305 
"aliases": [ 00:16:29.305 "e02ae930-d965-47c6-b3b2-6f6568ef7daf" 00:16:29.305 ], 00:16:29.305 "product_name": "Malloc disk", 00:16:29.305 "block_size": 512, 00:16:29.305 "num_blocks": 65536, 00:16:29.305 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:29.305 "assigned_rate_limits": { 00:16:29.305 "rw_ios_per_sec": 0, 00:16:29.305 "rw_mbytes_per_sec": 0, 00:16:29.305 "r_mbytes_per_sec": 0, 00:16:29.305 "w_mbytes_per_sec": 0 00:16:29.305 }, 00:16:29.305 "claimed": false, 00:16:29.305 "zoned": false, 00:16:29.305 "supported_io_types": { 00:16:29.305 "read": true, 00:16:29.305 "write": true, 00:16:29.305 "unmap": true, 00:16:29.305 "flush": true, 00:16:29.305 "reset": true, 00:16:29.305 "nvme_admin": false, 00:16:29.305 "nvme_io": false, 00:16:29.305 "nvme_io_md": false, 00:16:29.305 "write_zeroes": true, 00:16:29.305 "zcopy": true, 00:16:29.305 "get_zone_info": false, 00:16:29.305 "zone_management": false, 00:16:29.305 "zone_append": false, 00:16:29.305 "compare": false, 00:16:29.305 "compare_and_write": false, 00:16:29.305 "abort": true, 00:16:29.305 "seek_hole": false, 00:16:29.305 "seek_data": false, 00:16:29.305 "copy": true, 00:16:29.305 "nvme_iov_md": false 00:16:29.305 }, 00:16:29.305 "memory_domains": [ 00:16:29.305 { 00:16:29.305 "dma_device_id": "system", 00:16:29.305 "dma_device_type": 1 00:16:29.305 }, 00:16:29.305 { 00:16:29.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.305 "dma_device_type": 2 00:16:29.305 } 00:16:29.305 ], 00:16:29.305 "driver_specific": {} 00:16:29.305 } 00:16:29.305 ] 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:29.305 
09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.305 [2024-10-15 09:15:47.188180] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.305 [2024-10-15 09:15:47.188230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.305 [2024-10-15 09:15:47.188256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.305 [2024-10-15 09:15:47.190314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.305 [2024-10-15 09:15:47.190374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.305 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.565 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.565 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.565 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.565 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.565 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.565 "name": "Existed_Raid", 00:16:29.565 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:29.565 "strip_size_kb": 64, 00:16:29.565 "state": "configuring", 00:16:29.565 "raid_level": "raid5f", 00:16:29.565 "superblock": true, 00:16:29.565 "num_base_bdevs": 4, 00:16:29.565 "num_base_bdevs_discovered": 3, 00:16:29.565 "num_base_bdevs_operational": 4, 00:16:29.565 "base_bdevs_list": [ 00:16:29.565 { 00:16:29.565 "name": "BaseBdev1", 00:16:29.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.565 "is_configured": false, 00:16:29.565 "data_offset": 0, 00:16:29.565 "data_size": 0 00:16:29.565 }, 00:16:29.565 { 00:16:29.565 "name": "BaseBdev2", 00:16:29.565 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:29.565 "is_configured": true, 00:16:29.565 "data_offset": 2048, 00:16:29.565 "data_size": 63488 00:16:29.565 }, 00:16:29.565 { 00:16:29.565 "name": "BaseBdev3", 
00:16:29.565 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:29.565 "is_configured": true, 00:16:29.565 "data_offset": 2048, 00:16:29.565 "data_size": 63488 00:16:29.565 }, 00:16:29.565 { 00:16:29.565 "name": "BaseBdev4", 00:16:29.565 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:29.565 "is_configured": true, 00:16:29.565 "data_offset": 2048, 00:16:29.565 "data_size": 63488 00:16:29.565 } 00:16:29.565 ] 00:16:29.565 }' 00:16:29.565 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.565 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.828 [2024-10-15 09:15:47.655425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.828 
09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.828 "name": "Existed_Raid", 00:16:29.828 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:29.828 "strip_size_kb": 64, 00:16:29.828 "state": "configuring", 00:16:29.828 "raid_level": "raid5f", 00:16:29.828 "superblock": true, 00:16:29.828 "num_base_bdevs": 4, 00:16:29.828 "num_base_bdevs_discovered": 2, 00:16:29.828 "num_base_bdevs_operational": 4, 00:16:29.828 "base_bdevs_list": [ 00:16:29.828 { 00:16:29.828 "name": "BaseBdev1", 00:16:29.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.828 "is_configured": false, 00:16:29.828 "data_offset": 0, 00:16:29.828 "data_size": 0 00:16:29.828 }, 00:16:29.828 { 00:16:29.828 "name": null, 00:16:29.828 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:29.828 "is_configured": false, 00:16:29.828 "data_offset": 0, 00:16:29.828 "data_size": 63488 00:16:29.828 }, 00:16:29.828 { 
00:16:29.828 "name": "BaseBdev3", 00:16:29.828 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:29.828 "is_configured": true, 00:16:29.828 "data_offset": 2048, 00:16:29.828 "data_size": 63488 00:16:29.828 }, 00:16:29.828 { 00:16:29.828 "name": "BaseBdev4", 00:16:29.828 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:29.828 "is_configured": true, 00:16:29.828 "data_offset": 2048, 00:16:29.828 "data_size": 63488 00:16:29.828 } 00:16:29.828 ] 00:16:29.828 }' 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.828 09:15:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.395 [2024-10-15 09:15:48.202947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.395 BaseBdev1 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.395 [ 00:16:30.395 { 00:16:30.395 "name": "BaseBdev1", 00:16:30.395 "aliases": [ 00:16:30.395 "42eb71a7-0d1a-4096-bc47-d875f5e93c75" 00:16:30.395 ], 00:16:30.395 "product_name": "Malloc disk", 00:16:30.395 "block_size": 512, 00:16:30.395 "num_blocks": 65536, 00:16:30.395 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:30.395 "assigned_rate_limits": { 00:16:30.395 "rw_ios_per_sec": 0, 00:16:30.395 "rw_mbytes_per_sec": 0, 00:16:30.395 
"r_mbytes_per_sec": 0, 00:16:30.395 "w_mbytes_per_sec": 0 00:16:30.395 }, 00:16:30.395 "claimed": true, 00:16:30.395 "claim_type": "exclusive_write", 00:16:30.395 "zoned": false, 00:16:30.395 "supported_io_types": { 00:16:30.395 "read": true, 00:16:30.395 "write": true, 00:16:30.395 "unmap": true, 00:16:30.395 "flush": true, 00:16:30.395 "reset": true, 00:16:30.395 "nvme_admin": false, 00:16:30.395 "nvme_io": false, 00:16:30.395 "nvme_io_md": false, 00:16:30.395 "write_zeroes": true, 00:16:30.395 "zcopy": true, 00:16:30.395 "get_zone_info": false, 00:16:30.395 "zone_management": false, 00:16:30.395 "zone_append": false, 00:16:30.395 "compare": false, 00:16:30.395 "compare_and_write": false, 00:16:30.395 "abort": true, 00:16:30.395 "seek_hole": false, 00:16:30.395 "seek_data": false, 00:16:30.395 "copy": true, 00:16:30.395 "nvme_iov_md": false 00:16:30.395 }, 00:16:30.395 "memory_domains": [ 00:16:30.395 { 00:16:30.395 "dma_device_id": "system", 00:16:30.395 "dma_device_type": 1 00:16:30.395 }, 00:16:30.395 { 00:16:30.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.395 "dma_device_type": 2 00:16:30.395 } 00:16:30.395 ], 00:16:30.395 "driver_specific": {} 00:16:30.395 } 00:16:30.395 ] 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.395 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.396 09:15:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.396 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.654 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.654 "name": "Existed_Raid", 00:16:30.654 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:30.654 "strip_size_kb": 64, 00:16:30.654 "state": "configuring", 00:16:30.654 "raid_level": "raid5f", 00:16:30.654 "superblock": true, 00:16:30.654 "num_base_bdevs": 4, 00:16:30.654 "num_base_bdevs_discovered": 3, 00:16:30.654 "num_base_bdevs_operational": 4, 00:16:30.654 "base_bdevs_list": [ 00:16:30.654 { 00:16:30.654 "name": "BaseBdev1", 00:16:30.654 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:30.654 "is_configured": true, 00:16:30.654 "data_offset": 2048, 00:16:30.654 "data_size": 63488 00:16:30.654 
}, 00:16:30.654 { 00:16:30.654 "name": null, 00:16:30.654 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:30.654 "is_configured": false, 00:16:30.655 "data_offset": 0, 00:16:30.655 "data_size": 63488 00:16:30.655 }, 00:16:30.655 { 00:16:30.655 "name": "BaseBdev3", 00:16:30.655 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:30.655 "is_configured": true, 00:16:30.655 "data_offset": 2048, 00:16:30.655 "data_size": 63488 00:16:30.655 }, 00:16:30.655 { 00:16:30.655 "name": "BaseBdev4", 00:16:30.655 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:30.655 "is_configured": true, 00:16:30.655 "data_offset": 2048, 00:16:30.655 "data_size": 63488 00:16:30.655 } 00:16:30.655 ] 00:16:30.655 }' 00:16:30.655 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.655 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.914 
[2024-10-15 09:15:48.738172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:30.914 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.914 "name": "Existed_Raid", 00:16:30.914 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:30.914 "strip_size_kb": 64, 00:16:30.914 "state": "configuring", 00:16:30.914 "raid_level": "raid5f", 00:16:30.914 "superblock": true, 00:16:30.914 "num_base_bdevs": 4, 00:16:30.914 "num_base_bdevs_discovered": 2, 00:16:30.914 "num_base_bdevs_operational": 4, 00:16:30.914 "base_bdevs_list": [ 00:16:30.914 { 00:16:30.914 "name": "BaseBdev1", 00:16:30.914 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:30.914 "is_configured": true, 00:16:30.914 "data_offset": 2048, 00:16:30.914 "data_size": 63488 00:16:30.914 }, 00:16:30.914 { 00:16:30.914 "name": null, 00:16:30.914 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:30.914 "is_configured": false, 00:16:30.914 "data_offset": 0, 00:16:30.914 "data_size": 63488 00:16:30.914 }, 00:16:30.914 { 00:16:30.914 "name": null, 00:16:30.914 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:30.914 "is_configured": false, 00:16:30.914 "data_offset": 0, 00:16:30.914 "data_size": 63488 00:16:30.914 }, 00:16:30.914 { 00:16:30.915 "name": "BaseBdev4", 00:16:30.915 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:30.915 "is_configured": true, 00:16:30.915 "data_offset": 2048, 00:16:30.915 "data_size": 63488 00:16:30.915 } 00:16:30.915 ] 00:16:30.915 }' 00:16:30.915 09:15:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.915 09:15:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.484 [2024-10-15 09:15:49.229463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.484 09:15:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.484 "name": "Existed_Raid", 00:16:31.484 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:31.484 "strip_size_kb": 64, 00:16:31.484 "state": "configuring", 00:16:31.484 "raid_level": "raid5f", 00:16:31.484 "superblock": true, 00:16:31.484 "num_base_bdevs": 4, 00:16:31.484 "num_base_bdevs_discovered": 3, 00:16:31.484 "num_base_bdevs_operational": 4, 00:16:31.484 "base_bdevs_list": [ 00:16:31.484 { 00:16:31.484 "name": "BaseBdev1", 00:16:31.484 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:31.484 "is_configured": true, 00:16:31.484 "data_offset": 2048, 00:16:31.484 "data_size": 63488 00:16:31.484 }, 00:16:31.484 { 00:16:31.484 "name": null, 00:16:31.484 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:31.484 "is_configured": false, 00:16:31.484 "data_offset": 0, 00:16:31.484 "data_size": 63488 00:16:31.484 }, 00:16:31.484 { 00:16:31.484 "name": "BaseBdev3", 00:16:31.484 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:31.484 "is_configured": true, 00:16:31.484 "data_offset": 2048, 00:16:31.484 "data_size": 63488 00:16:31.484 }, 00:16:31.484 { 
00:16:31.484 "name": "BaseBdev4", 00:16:31.484 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:31.484 "is_configured": true, 00:16:31.484 "data_offset": 2048, 00:16:31.484 "data_size": 63488 00:16:31.484 } 00:16:31.484 ] 00:16:31.484 }' 00:16:31.484 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.485 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.053 [2024-10-15 09:15:49.728652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.053 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.053 "name": "Existed_Raid", 00:16:32.054 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:32.054 "strip_size_kb": 64, 00:16:32.054 "state": "configuring", 00:16:32.054 "raid_level": "raid5f", 00:16:32.054 "superblock": true, 00:16:32.054 "num_base_bdevs": 4, 00:16:32.054 "num_base_bdevs_discovered": 2, 00:16:32.054 
"num_base_bdevs_operational": 4, 00:16:32.054 "base_bdevs_list": [ 00:16:32.054 { 00:16:32.054 "name": null, 00:16:32.054 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:32.054 "is_configured": false, 00:16:32.054 "data_offset": 0, 00:16:32.054 "data_size": 63488 00:16:32.054 }, 00:16:32.054 { 00:16:32.054 "name": null, 00:16:32.054 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:32.054 "is_configured": false, 00:16:32.054 "data_offset": 0, 00:16:32.054 "data_size": 63488 00:16:32.054 }, 00:16:32.054 { 00:16:32.054 "name": "BaseBdev3", 00:16:32.054 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:32.054 "is_configured": true, 00:16:32.054 "data_offset": 2048, 00:16:32.054 "data_size": 63488 00:16:32.054 }, 00:16:32.054 { 00:16:32.054 "name": "BaseBdev4", 00:16:32.054 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:32.054 "is_configured": true, 00:16:32.054 "data_offset": 2048, 00:16:32.054 "data_size": 63488 00:16:32.054 } 00:16:32.054 ] 00:16:32.054 }' 00:16:32.054 09:15:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.054 09:15:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.622 [2024-10-15 09:15:50.341254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.622 "name": "Existed_Raid", 00:16:32.622 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:32.622 "strip_size_kb": 64, 00:16:32.622 "state": "configuring", 00:16:32.622 "raid_level": "raid5f", 00:16:32.622 "superblock": true, 00:16:32.622 "num_base_bdevs": 4, 00:16:32.622 "num_base_bdevs_discovered": 3, 00:16:32.622 "num_base_bdevs_operational": 4, 00:16:32.622 "base_bdevs_list": [ 00:16:32.622 { 00:16:32.622 "name": null, 00:16:32.622 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:32.622 "is_configured": false, 00:16:32.622 "data_offset": 0, 00:16:32.622 "data_size": 63488 00:16:32.622 }, 00:16:32.622 { 00:16:32.622 "name": "BaseBdev2", 00:16:32.622 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:32.622 "is_configured": true, 00:16:32.622 "data_offset": 2048, 00:16:32.622 "data_size": 63488 00:16:32.622 }, 00:16:32.622 { 00:16:32.622 "name": "BaseBdev3", 00:16:32.622 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:32.622 "is_configured": true, 00:16:32.622 "data_offset": 2048, 00:16:32.622 "data_size": 63488 00:16:32.622 }, 00:16:32.622 { 00:16:32.622 "name": "BaseBdev4", 00:16:32.622 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:32.622 "is_configured": true, 00:16:32.622 "data_offset": 2048, 00:16:32.622 "data_size": 63488 00:16:32.622 } 00:16:32.622 ] 00:16:32.622 }' 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.622 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:33.191 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:33.191 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.191 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.191 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.191 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.191 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 42eb71a7-0d1a-4096-bc47-d875f5e93c75 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.192 [2024-10-15 09:15:50.907665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:33.192 [2024-10-15 09:15:50.907969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:33.192 [2024-10-15 
09:15:50.907984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:33.192 [2024-10-15 09:15:50.908274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:33.192 NewBaseBdev 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.192 [2024-10-15 09:15:50.917213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:33.192 [2024-10-15 09:15:50.917292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:33.192 [2024-10-15 09:15:50.917638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.192 [ 00:16:33.192 { 00:16:33.192 "name": "NewBaseBdev", 00:16:33.192 "aliases": [ 00:16:33.192 "42eb71a7-0d1a-4096-bc47-d875f5e93c75" 00:16:33.192 ], 00:16:33.192 "product_name": "Malloc disk", 00:16:33.192 "block_size": 512, 00:16:33.192 "num_blocks": 65536, 00:16:33.192 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:33.192 "assigned_rate_limits": { 00:16:33.192 "rw_ios_per_sec": 0, 00:16:33.192 "rw_mbytes_per_sec": 0, 00:16:33.192 "r_mbytes_per_sec": 0, 00:16:33.192 "w_mbytes_per_sec": 0 00:16:33.192 }, 00:16:33.192 "claimed": true, 00:16:33.192 "claim_type": "exclusive_write", 00:16:33.192 "zoned": false, 00:16:33.192 "supported_io_types": { 00:16:33.192 "read": true, 00:16:33.192 "write": true, 00:16:33.192 "unmap": true, 00:16:33.192 "flush": true, 00:16:33.192 "reset": true, 00:16:33.192 "nvme_admin": false, 00:16:33.192 "nvme_io": false, 00:16:33.192 "nvme_io_md": false, 00:16:33.192 "write_zeroes": true, 00:16:33.192 "zcopy": true, 00:16:33.192 "get_zone_info": false, 00:16:33.192 "zone_management": false, 00:16:33.192 "zone_append": false, 00:16:33.192 "compare": false, 00:16:33.192 "compare_and_write": false, 00:16:33.192 "abort": true, 00:16:33.192 "seek_hole": false, 00:16:33.192 "seek_data": false, 00:16:33.192 "copy": true, 00:16:33.192 "nvme_iov_md": false 00:16:33.192 }, 00:16:33.192 "memory_domains": [ 00:16:33.192 { 00:16:33.192 "dma_device_id": "system", 00:16:33.192 "dma_device_type": 1 00:16:33.192 }, 00:16:33.192 { 00:16:33.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.192 "dma_device_type": 2 00:16:33.192 } 00:16:33.192 ], 00:16:33.192 "driver_specific": {} 00:16:33.192 } 00:16:33.192 ] 00:16:33.192 09:15:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.192 09:15:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:33.192 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.192 "name": "Existed_Raid", 00:16:33.192 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:33.192 "strip_size_kb": 64, 00:16:33.192 "state": "online", 00:16:33.192 "raid_level": "raid5f", 00:16:33.192 "superblock": true, 00:16:33.192 "num_base_bdevs": 4, 00:16:33.192 "num_base_bdevs_discovered": 4, 00:16:33.192 "num_base_bdevs_operational": 4, 00:16:33.192 "base_bdevs_list": [ 00:16:33.192 { 00:16:33.192 "name": "NewBaseBdev", 00:16:33.192 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:33.192 "is_configured": true, 00:16:33.192 "data_offset": 2048, 00:16:33.192 "data_size": 63488 00:16:33.192 }, 00:16:33.192 { 00:16:33.192 "name": "BaseBdev2", 00:16:33.192 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:33.192 "is_configured": true, 00:16:33.192 "data_offset": 2048, 00:16:33.192 "data_size": 63488 00:16:33.192 }, 00:16:33.192 { 00:16:33.192 "name": "BaseBdev3", 00:16:33.192 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:33.192 "is_configured": true, 00:16:33.192 "data_offset": 2048, 00:16:33.192 "data_size": 63488 00:16:33.192 }, 00:16:33.192 { 00:16:33.192 "name": "BaseBdev4", 00:16:33.192 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:33.192 "is_configured": true, 00:16:33.192 "data_offset": 2048, 00:16:33.192 "data_size": 63488 00:16:33.192 } 00:16:33.192 ] 00:16:33.192 }' 00:16:33.192 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.192 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.762 [2024-10-15 09:15:51.475448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.762 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.762 "name": "Existed_Raid", 00:16:33.762 "aliases": [ 00:16:33.762 "cfc89148-f8bd-45c6-99f9-56a308d66d13" 00:16:33.762 ], 00:16:33.762 "product_name": "Raid Volume", 00:16:33.762 "block_size": 512, 00:16:33.762 "num_blocks": 190464, 00:16:33.762 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:33.762 "assigned_rate_limits": { 00:16:33.762 "rw_ios_per_sec": 0, 00:16:33.762 "rw_mbytes_per_sec": 0, 00:16:33.762 "r_mbytes_per_sec": 0, 00:16:33.762 "w_mbytes_per_sec": 0 00:16:33.762 }, 00:16:33.762 "claimed": false, 00:16:33.762 "zoned": false, 00:16:33.762 "supported_io_types": { 00:16:33.762 "read": true, 00:16:33.762 "write": true, 00:16:33.762 "unmap": false, 00:16:33.762 "flush": false, 00:16:33.762 "reset": true, 00:16:33.762 "nvme_admin": false, 00:16:33.762 "nvme_io": false, 
00:16:33.762 "nvme_io_md": false, 00:16:33.762 "write_zeroes": true, 00:16:33.762 "zcopy": false, 00:16:33.762 "get_zone_info": false, 00:16:33.762 "zone_management": false, 00:16:33.762 "zone_append": false, 00:16:33.762 "compare": false, 00:16:33.762 "compare_and_write": false, 00:16:33.762 "abort": false, 00:16:33.762 "seek_hole": false, 00:16:33.762 "seek_data": false, 00:16:33.762 "copy": false, 00:16:33.762 "nvme_iov_md": false 00:16:33.762 }, 00:16:33.762 "driver_specific": { 00:16:33.762 "raid": { 00:16:33.762 "uuid": "cfc89148-f8bd-45c6-99f9-56a308d66d13", 00:16:33.763 "strip_size_kb": 64, 00:16:33.763 "state": "online", 00:16:33.763 "raid_level": "raid5f", 00:16:33.763 "superblock": true, 00:16:33.763 "num_base_bdevs": 4, 00:16:33.763 "num_base_bdevs_discovered": 4, 00:16:33.763 "num_base_bdevs_operational": 4, 00:16:33.763 "base_bdevs_list": [ 00:16:33.763 { 00:16:33.763 "name": "NewBaseBdev", 00:16:33.763 "uuid": "42eb71a7-0d1a-4096-bc47-d875f5e93c75", 00:16:33.763 "is_configured": true, 00:16:33.763 "data_offset": 2048, 00:16:33.763 "data_size": 63488 00:16:33.763 }, 00:16:33.763 { 00:16:33.763 "name": "BaseBdev2", 00:16:33.763 "uuid": "62919762-131c-4b02-ae4d-055bb6de9e36", 00:16:33.763 "is_configured": true, 00:16:33.763 "data_offset": 2048, 00:16:33.763 "data_size": 63488 00:16:33.763 }, 00:16:33.763 { 00:16:33.763 "name": "BaseBdev3", 00:16:33.763 "uuid": "507d80a4-fd95-4a81-b698-fa9dc15f713e", 00:16:33.763 "is_configured": true, 00:16:33.763 "data_offset": 2048, 00:16:33.763 "data_size": 63488 00:16:33.763 }, 00:16:33.763 { 00:16:33.763 "name": "BaseBdev4", 00:16:33.763 "uuid": "e02ae930-d965-47c6-b3b2-6f6568ef7daf", 00:16:33.763 "is_configured": true, 00:16:33.763 "data_offset": 2048, 00:16:33.763 "data_size": 63488 00:16:33.763 } 00:16:33.763 ] 00:16:33.763 } 00:16:33.763 } 00:16:33.763 }' 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:33.763 BaseBdev2 00:16:33.763 BaseBdev3 00:16:33.763 BaseBdev4' 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.763 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.763 09:15:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.023 [2024-10-15 09:15:51.794661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.023 [2024-10-15 09:15:51.794760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.023 [2024-10-15 09:15:51.794862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.023 [2024-10-15 09:15:51.795198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.023 [2024-10-15 09:15:51.795212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83748 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83748 ']' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83748 00:16:34.023 09:15:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83748 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.023 killing process with pid 83748 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83748' 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83748 00:16:34.023 [2024-10-15 09:15:51.843473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.023 09:15:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83748 00:16:34.592 [2024-10-15 09:15:52.281824] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.531 09:15:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:35.531 00:16:35.531 real 0m11.822s 00:16:35.531 user 0m18.750s 00:16:35.531 sys 0m2.121s 00:16:35.531 09:15:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.531 09:15:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.531 ************************************ 00:16:35.531 END TEST raid5f_state_function_test_sb 00:16:35.531 ************************************ 00:16:35.791 09:15:53 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:35.791 09:15:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:35.791 
09:15:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.791 09:15:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 ************************************ 00:16:35.791 START TEST raid5f_superblock_test 00:16:35.791 ************************************ 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84419 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84419 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84419 ']' 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.791 09:15:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.791 [2024-10-15 09:15:53.563028] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:16:35.791 [2024-10-15 09:15:53.563150] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84419 ] 00:16:36.050 [2024-10-15 09:15:53.715665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.050 [2024-10-15 09:15:53.852052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.308 [2024-10-15 09:15:54.101521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.308 [2024-10-15 09:15:54.101596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.568 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.568 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:36.568 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:36.568 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.568 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:36.569 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:36.569 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:36.569 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:36.569 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:36.569 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:36.569 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:36.569 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.569 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 malloc1 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 [2024-10-15 09:15:54.483947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:36.828 [2024-10-15 09:15:54.484080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.828 [2024-10-15 09:15:54.484145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:36.828 [2024-10-15 09:15:54.484184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.828 [2024-10-15 09:15:54.486662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.828 [2024-10-15 09:15:54.486758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:36.828 pt1 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 malloc2 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 [2024-10-15 09:15:54.545714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.828 [2024-10-15 09:15:54.545826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.828 [2024-10-15 09:15:54.545887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:36.828 [2024-10-15 09:15:54.545927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.828 [2024-10-15 09:15:54.548376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.828 [2024-10-15 09:15:54.548457] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.828 pt2 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 malloc3 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 [2024-10-15 09:15:54.623762] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:36.828 [2024-10-15 09:15:54.623872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.828 [2024-10-15 09:15:54.623928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:36.828 [2024-10-15 09:15:54.623967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.828 [2024-10-15 09:15:54.626386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.828 [2024-10-15 09:15:54.626470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:36.828 pt3 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.828 09:15:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 malloc4 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.828 [2024-10-15 09:15:54.687730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:36.828 [2024-10-15 09:15:54.687837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.828 [2024-10-15 09:15:54.687892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:36.828 [2024-10-15 09:15:54.687927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.828 [2024-10-15 09:15:54.690362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.828 [2024-10-15 09:15:54.690443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:36.828 pt4 00:16:36.828 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.829 [2024-10-15 09:15:54.699791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:36.829 [2024-10-15 09:15:54.701903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.829 [2024-10-15 09:15:54.701976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:36.829 [2024-10-15 09:15:54.702049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:36.829 [2024-10-15 09:15:54.702276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:36.829 [2024-10-15 09:15:54.702290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:36.829 [2024-10-15 09:15:54.702585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:36.829 [2024-10-15 09:15:54.711410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:36.829 [2024-10-15 09:15:54.711486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:36.829 [2024-10-15 09:15:54.711722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.829 
09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.829 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.088 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.088 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.088 "name": "raid_bdev1", 00:16:37.088 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:37.088 "strip_size_kb": 64, 00:16:37.088 "state": "online", 00:16:37.088 "raid_level": "raid5f", 00:16:37.088 "superblock": true, 00:16:37.088 "num_base_bdevs": 4, 00:16:37.088 "num_base_bdevs_discovered": 4, 00:16:37.088 "num_base_bdevs_operational": 4, 00:16:37.088 "base_bdevs_list": [ 00:16:37.088 { 00:16:37.088 "name": "pt1", 00:16:37.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.088 "is_configured": true, 00:16:37.088 "data_offset": 2048, 00:16:37.088 "data_size": 63488 00:16:37.088 }, 00:16:37.088 { 00:16:37.088 "name": "pt2", 00:16:37.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.088 "is_configured": true, 00:16:37.088 "data_offset": 2048, 00:16:37.088 
"data_size": 63488 00:16:37.088 }, 00:16:37.088 { 00:16:37.088 "name": "pt3", 00:16:37.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.088 "is_configured": true, 00:16:37.088 "data_offset": 2048, 00:16:37.088 "data_size": 63488 00:16:37.088 }, 00:16:37.088 { 00:16:37.088 "name": "pt4", 00:16:37.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.088 "is_configured": true, 00:16:37.088 "data_offset": 2048, 00:16:37.088 "data_size": 63488 00:16:37.088 } 00:16:37.088 ] 00:16:37.088 }' 00:16:37.088 09:15:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.088 09:15:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.377 [2024-10-15 09:15:55.188979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.377 09:15:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.378 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:37.378 "name": "raid_bdev1", 00:16:37.378 "aliases": [ 00:16:37.378 "4c878592-df13-4f19-83ce-102780b3a6ad" 00:16:37.378 ], 00:16:37.378 "product_name": "Raid Volume", 00:16:37.378 "block_size": 512, 00:16:37.378 "num_blocks": 190464, 00:16:37.378 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:37.378 "assigned_rate_limits": { 00:16:37.378 "rw_ios_per_sec": 0, 00:16:37.378 "rw_mbytes_per_sec": 0, 00:16:37.378 "r_mbytes_per_sec": 0, 00:16:37.378 "w_mbytes_per_sec": 0 00:16:37.378 }, 00:16:37.378 "claimed": false, 00:16:37.378 "zoned": false, 00:16:37.378 "supported_io_types": { 00:16:37.378 "read": true, 00:16:37.378 "write": true, 00:16:37.378 "unmap": false, 00:16:37.378 "flush": false, 00:16:37.378 "reset": true, 00:16:37.378 "nvme_admin": false, 00:16:37.378 "nvme_io": false, 00:16:37.378 "nvme_io_md": false, 00:16:37.378 "write_zeroes": true, 00:16:37.378 "zcopy": false, 00:16:37.378 "get_zone_info": false, 00:16:37.378 "zone_management": false, 00:16:37.378 "zone_append": false, 00:16:37.378 "compare": false, 00:16:37.378 "compare_and_write": false, 00:16:37.378 "abort": false, 00:16:37.378 "seek_hole": false, 00:16:37.378 "seek_data": false, 00:16:37.378 "copy": false, 00:16:37.378 "nvme_iov_md": false 00:16:37.378 }, 00:16:37.378 "driver_specific": { 00:16:37.378 "raid": { 00:16:37.378 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:37.378 "strip_size_kb": 64, 00:16:37.378 "state": "online", 00:16:37.378 "raid_level": "raid5f", 00:16:37.378 "superblock": true, 00:16:37.378 "num_base_bdevs": 4, 00:16:37.378 "num_base_bdevs_discovered": 4, 00:16:37.378 "num_base_bdevs_operational": 4, 00:16:37.378 "base_bdevs_list": [ 00:16:37.378 { 00:16:37.378 "name": "pt1", 00:16:37.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.378 "is_configured": true, 00:16:37.378 "data_offset": 2048, 
00:16:37.378 "data_size": 63488 00:16:37.378 }, 00:16:37.378 { 00:16:37.378 "name": "pt2", 00:16:37.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.378 "is_configured": true, 00:16:37.378 "data_offset": 2048, 00:16:37.378 "data_size": 63488 00:16:37.378 }, 00:16:37.378 { 00:16:37.378 "name": "pt3", 00:16:37.378 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.378 "is_configured": true, 00:16:37.378 "data_offset": 2048, 00:16:37.378 "data_size": 63488 00:16:37.378 }, 00:16:37.378 { 00:16:37.378 "name": "pt4", 00:16:37.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.378 "is_configured": true, 00:16:37.378 "data_offset": 2048, 00:16:37.378 "data_size": 63488 00:16:37.378 } 00:16:37.378 ] 00:16:37.378 } 00:16:37.378 } 00:16:37.378 }' 00:16:37.378 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:37.638 pt2 00:16:37.638 pt3 00:16:37.638 pt4' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 09:15:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:37.638 [2024-10-15 09:15:55.508366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.638 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4c878592-df13-4f19-83ce-102780b3a6ad 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
4c878592-df13-4f19-83ce-102780b3a6ad ']' 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.896 [2024-10-15 09:15:55.556074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.896 [2024-10-15 09:15:55.556105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.896 [2024-10-15 09:15:55.556197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.896 [2024-10-15 09:15:55.556294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.896 [2024-10-15 09:15:55.556311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:37.896 
09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:37.896 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.897 09:15:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.897 [2024-10-15 09:15:55.727824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:37.897 [2024-10-15 09:15:55.729987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:37.897 [2024-10-15 09:15:55.730043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:37.897 [2024-10-15 09:15:55.730083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:37.897 [2024-10-15 09:15:55.730137] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:37.897 [2024-10-15 09:15:55.730193] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:37.897 [2024-10-15 09:15:55.730216] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:37.897 [2024-10-15 09:15:55.730238] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:37.897 [2024-10-15 09:15:55.730254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.897 [2024-10-15 09:15:55.730267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:37.897 request: 00:16:37.897 { 00:16:37.897 "name": "raid_bdev1", 00:16:37.897 "raid_level": "raid5f", 00:16:37.897 "base_bdevs": [ 00:16:37.897 "malloc1", 00:16:37.897 "malloc2", 00:16:37.897 "malloc3", 00:16:37.897 "malloc4" 00:16:37.897 ], 00:16:37.897 "strip_size_kb": 64, 00:16:37.897 "superblock": false, 00:16:37.897 "method": "bdev_raid_create", 00:16:37.897 "req_id": 1 00:16:37.897 } 00:16:37.897 Got JSON-RPC error response 
00:16:37.897 response: 00:16:37.897 { 00:16:37.897 "code": -17, 00:16:37.897 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:37.897 } 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.897 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.897 [2024-10-15 09:15:55.791670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:37.897 [2024-10-15 09:15:55.791788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:37.897 [2024-10-15 09:15:55.791829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:37.897 [2024-10-15 09:15:55.791871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.156 [2024-10-15 09:15:55.794374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.156 [2024-10-15 09:15:55.794462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.156 [2024-10-15 09:15:55.794582] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:38.156 [2024-10-15 09:15:55.794701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.156 pt1 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.156 "name": "raid_bdev1", 00:16:38.156 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:38.156 "strip_size_kb": 64, 00:16:38.156 "state": "configuring", 00:16:38.156 "raid_level": "raid5f", 00:16:38.156 "superblock": true, 00:16:38.156 "num_base_bdevs": 4, 00:16:38.156 "num_base_bdevs_discovered": 1, 00:16:38.156 "num_base_bdevs_operational": 4, 00:16:38.156 "base_bdevs_list": [ 00:16:38.156 { 00:16:38.156 "name": "pt1", 00:16:38.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.156 "is_configured": true, 00:16:38.156 "data_offset": 2048, 00:16:38.156 "data_size": 63488 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "name": null, 00:16:38.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.156 "is_configured": false, 00:16:38.156 "data_offset": 2048, 00:16:38.156 "data_size": 63488 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "name": null, 00:16:38.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.156 "is_configured": false, 00:16:38.156 "data_offset": 2048, 00:16:38.156 "data_size": 63488 00:16:38.156 }, 00:16:38.156 { 00:16:38.156 "name": null, 00:16:38.156 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.156 "is_configured": false, 00:16:38.156 "data_offset": 2048, 00:16:38.156 "data_size": 63488 00:16:38.156 } 00:16:38.156 ] 00:16:38.156 }' 
00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.156 09:15:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.416 [2024-10-15 09:15:56.262896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.416 [2024-10-15 09:15:56.263042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.416 [2024-10-15 09:15:56.263071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:38.416 [2024-10-15 09:15:56.263085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.416 [2024-10-15 09:15:56.263615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.416 [2024-10-15 09:15:56.263639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.416 [2024-10-15 09:15:56.263744] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:38.416 [2024-10-15 09:15:56.263775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.416 pt2 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.416 [2024-10-15 09:15:56.274890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.416 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:38.675 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.675 "name": "raid_bdev1", 00:16:38.675 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:38.675 "strip_size_kb": 64, 00:16:38.675 "state": "configuring", 00:16:38.675 "raid_level": "raid5f", 00:16:38.675 "superblock": true, 00:16:38.675 "num_base_bdevs": 4, 00:16:38.675 "num_base_bdevs_discovered": 1, 00:16:38.675 "num_base_bdevs_operational": 4, 00:16:38.675 "base_bdevs_list": [ 00:16:38.675 { 00:16:38.675 "name": "pt1", 00:16:38.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.675 "is_configured": true, 00:16:38.675 "data_offset": 2048, 00:16:38.675 "data_size": 63488 00:16:38.675 }, 00:16:38.675 { 00:16:38.675 "name": null, 00:16:38.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.675 "is_configured": false, 00:16:38.675 "data_offset": 0, 00:16:38.675 "data_size": 63488 00:16:38.675 }, 00:16:38.675 { 00:16:38.675 "name": null, 00:16:38.675 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.675 "is_configured": false, 00:16:38.675 "data_offset": 2048, 00:16:38.675 "data_size": 63488 00:16:38.675 }, 00:16:38.675 { 00:16:38.675 "name": null, 00:16:38.675 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.675 "is_configured": false, 00:16:38.675 "data_offset": 2048, 00:16:38.675 "data_size": 63488 00:16:38.675 } 00:16:38.675 ] 00:16:38.675 }' 00:16:38.675 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.675 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.934 [2024-10-15 09:15:56.750158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.934 [2024-10-15 09:15:56.750287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.934 [2024-10-15 09:15:56.750333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:38.934 [2024-10-15 09:15:56.750383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.934 [2024-10-15 09:15:56.750930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.934 [2024-10-15 09:15:56.750997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.934 [2024-10-15 09:15:56.751126] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:38.934 [2024-10-15 09:15:56.751186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.934 pt2 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.934 [2024-10-15 09:15:56.762108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:38.934 [2024-10-15 09:15:56.762208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.934 [2024-10-15 09:15:56.762253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:38.934 [2024-10-15 09:15:56.762292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.934 [2024-10-15 09:15:56.762758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.934 [2024-10-15 09:15:56.762818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:38.934 [2024-10-15 09:15:56.762922] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:38.934 [2024-10-15 09:15:56.762976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.934 pt3 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.934 [2024-10-15 09:15:56.774057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:38.934 [2024-10-15 09:15:56.774111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.934 [2024-10-15 09:15:56.774131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:38.934 [2024-10-15 09:15:56.774141] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.934 [2024-10-15 09:15:56.774538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.934 [2024-10-15 09:15:56.774555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:38.934 [2024-10-15 09:15:56.774621] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:38.934 [2024-10-15 09:15:56.774641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:38.934 [2024-10-15 09:15:56.774827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.934 [2024-10-15 09:15:56.774839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:38.934 [2024-10-15 09:15:56.775117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:38.934 pt4 00:16:38.934 [2024-10-15 09:15:56.783834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.934 [2024-10-15 09:15:56.783860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:38.934 [2024-10-15 09:15:56.784050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.934 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.935 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.935 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.935 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.194 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.194 "name": "raid_bdev1", 00:16:39.194 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:39.194 "strip_size_kb": 64, 00:16:39.194 "state": "online", 00:16:39.194 "raid_level": "raid5f", 00:16:39.194 "superblock": true, 00:16:39.194 "num_base_bdevs": 4, 00:16:39.194 "num_base_bdevs_discovered": 4, 00:16:39.194 "num_base_bdevs_operational": 4, 00:16:39.194 "base_bdevs_list": [ 00:16:39.194 { 00:16:39.194 "name": "pt1", 00:16:39.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.194 "is_configured": true, 00:16:39.194 
"data_offset": 2048, 00:16:39.194 "data_size": 63488 00:16:39.194 }, 00:16:39.194 { 00:16:39.194 "name": "pt2", 00:16:39.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.194 "is_configured": true, 00:16:39.194 "data_offset": 2048, 00:16:39.194 "data_size": 63488 00:16:39.194 }, 00:16:39.194 { 00:16:39.194 "name": "pt3", 00:16:39.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.194 "is_configured": true, 00:16:39.194 "data_offset": 2048, 00:16:39.194 "data_size": 63488 00:16:39.194 }, 00:16:39.194 { 00:16:39.194 "name": "pt4", 00:16:39.194 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:39.194 "is_configured": true, 00:16:39.194 "data_offset": 2048, 00:16:39.194 "data_size": 63488 00:16:39.194 } 00:16:39.194 ] 00:16:39.194 }' 00:16:39.194 09:15:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.194 09:15:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.453 09:15:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.453 [2024-10-15 09:15:57.297568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.453 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.453 "name": "raid_bdev1", 00:16:39.453 "aliases": [ 00:16:39.453 "4c878592-df13-4f19-83ce-102780b3a6ad" 00:16:39.453 ], 00:16:39.453 "product_name": "Raid Volume", 00:16:39.453 "block_size": 512, 00:16:39.453 "num_blocks": 190464, 00:16:39.453 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:39.453 "assigned_rate_limits": { 00:16:39.453 "rw_ios_per_sec": 0, 00:16:39.453 "rw_mbytes_per_sec": 0, 00:16:39.453 "r_mbytes_per_sec": 0, 00:16:39.453 "w_mbytes_per_sec": 0 00:16:39.453 }, 00:16:39.453 "claimed": false, 00:16:39.453 "zoned": false, 00:16:39.453 "supported_io_types": { 00:16:39.453 "read": true, 00:16:39.453 "write": true, 00:16:39.453 "unmap": false, 00:16:39.453 "flush": false, 00:16:39.453 "reset": true, 00:16:39.453 "nvme_admin": false, 00:16:39.453 "nvme_io": false, 00:16:39.453 "nvme_io_md": false, 00:16:39.453 "write_zeroes": true, 00:16:39.453 "zcopy": false, 00:16:39.453 "get_zone_info": false, 00:16:39.453 "zone_management": false, 00:16:39.453 "zone_append": false, 00:16:39.453 "compare": false, 00:16:39.453 "compare_and_write": false, 00:16:39.453 "abort": false, 00:16:39.453 "seek_hole": false, 00:16:39.453 "seek_data": false, 00:16:39.453 "copy": false, 00:16:39.453 "nvme_iov_md": false 00:16:39.453 }, 00:16:39.453 "driver_specific": { 00:16:39.453 "raid": { 00:16:39.453 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:39.453 "strip_size_kb": 64, 00:16:39.453 "state": "online", 00:16:39.453 "raid_level": "raid5f", 00:16:39.453 "superblock": true, 00:16:39.453 "num_base_bdevs": 4, 00:16:39.453 "num_base_bdevs_discovered": 4, 
00:16:39.453 "num_base_bdevs_operational": 4, 00:16:39.453 "base_bdevs_list": [ 00:16:39.453 { 00:16:39.453 "name": "pt1", 00:16:39.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.453 "is_configured": true, 00:16:39.453 "data_offset": 2048, 00:16:39.453 "data_size": 63488 00:16:39.453 }, 00:16:39.453 { 00:16:39.453 "name": "pt2", 00:16:39.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.453 "is_configured": true, 00:16:39.453 "data_offset": 2048, 00:16:39.453 "data_size": 63488 00:16:39.453 }, 00:16:39.453 { 00:16:39.453 "name": "pt3", 00:16:39.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.454 "is_configured": true, 00:16:39.454 "data_offset": 2048, 00:16:39.454 "data_size": 63488 00:16:39.454 }, 00:16:39.454 { 00:16:39.454 "name": "pt4", 00:16:39.454 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:39.454 "is_configured": true, 00:16:39.454 "data_offset": 2048, 00:16:39.454 "data_size": 63488 00:16:39.454 } 00:16:39.454 ] 00:16:39.454 } 00:16:39.454 } 00:16:39.454 }' 00:16:39.454 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:39.713 pt2 00:16:39.713 pt3 00:16:39.713 pt4' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.713 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.973 [2024-10-15 09:15:57.640937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.973 09:15:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4c878592-df13-4f19-83ce-102780b3a6ad '!=' 4c878592-df13-4f19-83ce-102780b3a6ad ']' 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.973 [2024-10-15 09:15:57.692689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.973 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.974 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.974 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.974 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.974 "name": "raid_bdev1", 00:16:39.974 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:39.974 "strip_size_kb": 64, 00:16:39.974 "state": "online", 00:16:39.974 "raid_level": "raid5f", 00:16:39.974 "superblock": true, 00:16:39.974 "num_base_bdevs": 4, 00:16:39.974 "num_base_bdevs_discovered": 3, 00:16:39.974 "num_base_bdevs_operational": 3, 00:16:39.974 "base_bdevs_list": [ 00:16:39.974 { 00:16:39.974 "name": null, 00:16:39.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.974 "is_configured": false, 00:16:39.974 "data_offset": 0, 00:16:39.974 "data_size": 63488 00:16:39.974 }, 00:16:39.974 { 00:16:39.974 "name": "pt2", 00:16:39.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.974 "is_configured": true, 00:16:39.974 "data_offset": 2048, 00:16:39.974 "data_size": 63488 00:16:39.974 }, 00:16:39.974 { 00:16:39.974 "name": "pt3", 00:16:39.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:39.974 "is_configured": true, 00:16:39.974 "data_offset": 2048, 00:16:39.974 "data_size": 63488 00:16:39.974 }, 00:16:39.974 { 00:16:39.974 "name": "pt4", 00:16:39.974 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:39.974 "is_configured": true, 00:16:39.974 
"data_offset": 2048, 00:16:39.974 "data_size": 63488 00:16:39.974 } 00:16:39.974 ] 00:16:39.974 }' 00:16:39.974 09:15:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.974 09:15:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.546 [2024-10-15 09:15:58.207831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.546 [2024-10-15 09:15:58.207866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.546 [2024-10-15 09:15:58.207957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.546 [2024-10-15 09:15:58.208046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.546 [2024-10-15 09:15:58.208057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.546 [2024-10-15 09:15:58.303637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.546 [2024-10-15 09:15:58.303780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.546 [2024-10-15 09:15:58.303812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:40.546 [2024-10-15 09:15:58.303823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.546 [2024-10-15 09:15:58.306357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.546 [2024-10-15 09:15:58.306399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.546 [2024-10-15 09:15:58.306495] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.546 [2024-10-15 09:15:58.306555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.546 pt2 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.546 "name": "raid_bdev1", 00:16:40.546 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:40.546 "strip_size_kb": 64, 00:16:40.546 "state": "configuring", 00:16:40.546 "raid_level": "raid5f", 00:16:40.546 "superblock": true, 00:16:40.546 
"num_base_bdevs": 4, 00:16:40.546 "num_base_bdevs_discovered": 1, 00:16:40.546 "num_base_bdevs_operational": 3, 00:16:40.546 "base_bdevs_list": [ 00:16:40.546 { 00:16:40.546 "name": null, 00:16:40.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.546 "is_configured": false, 00:16:40.546 "data_offset": 2048, 00:16:40.546 "data_size": 63488 00:16:40.546 }, 00:16:40.546 { 00:16:40.546 "name": "pt2", 00:16:40.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.546 "is_configured": true, 00:16:40.546 "data_offset": 2048, 00:16:40.546 "data_size": 63488 00:16:40.546 }, 00:16:40.546 { 00:16:40.546 "name": null, 00:16:40.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.546 "is_configured": false, 00:16:40.546 "data_offset": 2048, 00:16:40.546 "data_size": 63488 00:16:40.546 }, 00:16:40.546 { 00:16:40.546 "name": null, 00:16:40.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:40.546 "is_configured": false, 00:16:40.546 "data_offset": 2048, 00:16:40.546 "data_size": 63488 00:16:40.546 } 00:16:40.546 ] 00:16:40.546 }' 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.546 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.143 [2024-10-15 09:15:58.730971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:41.143 [2024-10-15 
09:15:58.731092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.143 [2024-10-15 09:15:58.731134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:41.143 [2024-10-15 09:15:58.731162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.143 [2024-10-15 09:15:58.731688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.143 [2024-10-15 09:15:58.731769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:41.143 [2024-10-15 09:15:58.731898] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:41.143 [2024-10-15 09:15:58.731962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.143 pt3 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.143 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.144 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.144 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.144 "name": "raid_bdev1", 00:16:41.144 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:41.144 "strip_size_kb": 64, 00:16:41.144 "state": "configuring", 00:16:41.144 "raid_level": "raid5f", 00:16:41.144 "superblock": true, 00:16:41.144 "num_base_bdevs": 4, 00:16:41.144 "num_base_bdevs_discovered": 2, 00:16:41.144 "num_base_bdevs_operational": 3, 00:16:41.144 "base_bdevs_list": [ 00:16:41.144 { 00:16:41.144 "name": null, 00:16:41.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.144 "is_configured": false, 00:16:41.144 "data_offset": 2048, 00:16:41.144 "data_size": 63488 00:16:41.144 }, 00:16:41.144 { 00:16:41.144 "name": "pt2", 00:16:41.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.144 "is_configured": true, 00:16:41.144 "data_offset": 2048, 00:16:41.144 "data_size": 63488 00:16:41.144 }, 00:16:41.144 { 00:16:41.144 "name": "pt3", 00:16:41.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.144 "is_configured": true, 00:16:41.144 "data_offset": 2048, 00:16:41.144 "data_size": 63488 00:16:41.144 }, 00:16:41.144 { 00:16:41.144 "name": null, 00:16:41.144 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.144 "is_configured": false, 00:16:41.144 "data_offset": 2048, 
00:16:41.144 "data_size": 63488 00:16:41.144 } 00:16:41.144 ] 00:16:41.144 }' 00:16:41.144 09:15:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.144 09:15:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.415 [2024-10-15 09:15:59.174211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:41.415 [2024-10-15 09:15:59.174300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.415 [2024-10-15 09:15:59.174326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:41.415 [2024-10-15 09:15:59.174335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.415 [2024-10-15 09:15:59.174846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.415 [2024-10-15 09:15:59.174865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:41.415 [2024-10-15 09:15:59.174959] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:41.415 [2024-10-15 09:15:59.174984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:41.415 [2024-10-15 09:15:59.175130] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:41.415 [2024-10-15 09:15:59.175145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:41.415 [2024-10-15 09:15:59.175397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:41.415 [2024-10-15 09:15:59.182196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:41.415 pt4 00:16:41.415 [2024-10-15 09:15:59.182267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:41.415 [2024-10-15 09:15:59.182597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.415 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.416 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.416 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.416 
09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.416 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.416 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.416 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.416 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.416 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.416 "name": "raid_bdev1", 00:16:41.416 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:41.416 "strip_size_kb": 64, 00:16:41.416 "state": "online", 00:16:41.416 "raid_level": "raid5f", 00:16:41.416 "superblock": true, 00:16:41.416 "num_base_bdevs": 4, 00:16:41.416 "num_base_bdevs_discovered": 3, 00:16:41.416 "num_base_bdevs_operational": 3, 00:16:41.416 "base_bdevs_list": [ 00:16:41.416 { 00:16:41.416 "name": null, 00:16:41.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.416 "is_configured": false, 00:16:41.416 "data_offset": 2048, 00:16:41.416 "data_size": 63488 00:16:41.416 }, 00:16:41.416 { 00:16:41.416 "name": "pt2", 00:16:41.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.416 "is_configured": true, 00:16:41.416 "data_offset": 2048, 00:16:41.416 "data_size": 63488 00:16:41.416 }, 00:16:41.416 { 00:16:41.416 "name": "pt3", 00:16:41.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.416 "is_configured": true, 00:16:41.416 "data_offset": 2048, 00:16:41.416 "data_size": 63488 00:16:41.416 }, 00:16:41.416 { 00:16:41.416 "name": "pt4", 00:16:41.416 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.416 "is_configured": true, 00:16:41.416 "data_offset": 2048, 00:16:41.416 "data_size": 63488 00:16:41.416 } 00:16:41.416 ] 00:16:41.416 }' 00:16:41.416 09:15:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.416 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.983 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.983 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.983 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.983 [2024-10-15 09:15:59.614948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.984 [2024-10-15 09:15:59.615023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.984 [2024-10-15 09:15:59.615129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.984 [2024-10-15 09:15:59.615220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.984 [2024-10-15 09:15:59.615270] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 [2024-10-15 09:15:59.674820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:41.984 [2024-10-15 09:15:59.674932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.984 [2024-10-15 09:15:59.674979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:41.984 [2024-10-15 09:15:59.675017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.984 [2024-10-15 09:15:59.677414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.984 [2024-10-15 09:15:59.677492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:41.984 [2024-10-15 09:15:59.677635] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:41.984 [2024-10-15 09:15:59.677753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:41.984 
[2024-10-15 09:15:59.677979] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:41.984 [2024-10-15 09:15:59.678030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.984 [2024-10-15 09:15:59.678051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:41.984 [2024-10-15 09:15:59.678129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.984 [2024-10-15 09:15:59.678264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.984 pt1 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.984 "name": "raid_bdev1", 00:16:41.984 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:41.984 "strip_size_kb": 64, 00:16:41.984 "state": "configuring", 00:16:41.984 "raid_level": "raid5f", 00:16:41.984 "superblock": true, 00:16:41.984 "num_base_bdevs": 4, 00:16:41.984 "num_base_bdevs_discovered": 2, 00:16:41.984 "num_base_bdevs_operational": 3, 00:16:41.984 "base_bdevs_list": [ 00:16:41.984 { 00:16:41.984 "name": null, 00:16:41.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.984 "is_configured": false, 00:16:41.984 "data_offset": 2048, 00:16:41.984 "data_size": 63488 00:16:41.984 }, 00:16:41.984 { 00:16:41.984 "name": "pt2", 00:16:41.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.984 "is_configured": true, 00:16:41.984 "data_offset": 2048, 00:16:41.984 "data_size": 63488 00:16:41.984 }, 00:16:41.984 { 00:16:41.984 "name": "pt3", 00:16:41.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.984 "is_configured": true, 00:16:41.984 "data_offset": 2048, 00:16:41.984 "data_size": 63488 00:16:41.984 }, 00:16:41.984 { 00:16:41.984 "name": null, 00:16:41.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.984 "is_configured": false, 00:16:41.984 "data_offset": 2048, 00:16:41.984 "data_size": 63488 00:16:41.984 } 00:16:41.984 ] 
00:16:41.984 }' 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.984 09:15:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.551 [2024-10-15 09:16:00.221977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:42.551 [2024-10-15 09:16:00.222049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.551 [2024-10-15 09:16:00.222079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:42.551 [2024-10-15 09:16:00.222091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.551 [2024-10-15 09:16:00.222596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.551 [2024-10-15 09:16:00.222616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:42.551 [2024-10-15 09:16:00.222733] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:42.551 [2024-10-15 09:16:00.222763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:42.551 [2024-10-15 09:16:00.222913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:42.551 [2024-10-15 09:16:00.222930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:42.551 [2024-10-15 09:16:00.223209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:42.551 [2024-10-15 09:16:00.231173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:42.551 [2024-10-15 09:16:00.231198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:42.551 [2024-10-15 09:16:00.231456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.551 pt4 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.551 09:16:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.551 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.552 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.552 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.552 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.552 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.552 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.552 "name": "raid_bdev1", 00:16:42.552 "uuid": "4c878592-df13-4f19-83ce-102780b3a6ad", 00:16:42.552 "strip_size_kb": 64, 00:16:42.552 "state": "online", 00:16:42.552 "raid_level": "raid5f", 00:16:42.552 "superblock": true, 00:16:42.552 "num_base_bdevs": 4, 00:16:42.552 "num_base_bdevs_discovered": 3, 00:16:42.552 "num_base_bdevs_operational": 3, 00:16:42.552 "base_bdevs_list": [ 00:16:42.552 { 00:16:42.552 "name": null, 00:16:42.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.552 "is_configured": false, 00:16:42.552 "data_offset": 2048, 00:16:42.552 "data_size": 63488 00:16:42.552 }, 00:16:42.552 { 00:16:42.552 "name": "pt2", 00:16:42.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.552 "is_configured": true, 00:16:42.552 "data_offset": 2048, 00:16:42.552 "data_size": 63488 00:16:42.552 }, 00:16:42.552 { 00:16:42.552 "name": "pt3", 00:16:42.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.552 "is_configured": true, 00:16:42.552 "data_offset": 2048, 00:16:42.552 "data_size": 63488 
00:16:42.552 }, 00:16:42.552 { 00:16:42.552 "name": "pt4", 00:16:42.552 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.552 "is_configured": true, 00:16:42.552 "data_offset": 2048, 00:16:42.552 "data_size": 63488 00:16:42.552 } 00:16:42.552 ] 00:16:42.552 }' 00:16:42.552 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.552 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.808 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:42.808 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:42.808 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.808 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.808 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.071 [2024-10-15 09:16:00.723915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4c878592-df13-4f19-83ce-102780b3a6ad '!=' 4c878592-df13-4f19-83ce-102780b3a6ad ']' 00:16:43.071 09:16:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84419 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84419 ']' 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84419 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84419 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.071 killing process with pid 84419 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84419' 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84419 00:16:43.071 [2024-10-15 09:16:00.803119] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:43.071 [2024-10-15 09:16:00.803235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.071 09:16:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84419 00:16:43.071 [2024-10-15 09:16:00.803320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.071 [2024-10-15 09:16:00.803333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:43.331 [2024-10-15 09:16:01.203619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.829 09:16:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:44.829 
00:16:44.829 real 0m8.881s 00:16:44.829 user 0m13.993s 00:16:44.829 sys 0m1.620s 00:16:44.829 09:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.829 ************************************ 00:16:44.829 END TEST raid5f_superblock_test 00:16:44.829 ************************************ 00:16:44.829 09:16:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.829 09:16:02 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:44.829 09:16:02 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:44.829 09:16:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:44.829 09:16:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.829 09:16:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.829 ************************************ 00:16:44.829 START TEST raid5f_rebuild_test 00:16:44.829 ************************************ 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:44.829 09:16:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84904 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84904 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84904 ']' 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.829 09:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.829 [2024-10-15 09:16:02.525760] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:16:44.829 [2024-10-15 09:16:02.525984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84904 ] 00:16:44.830 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:44.830 Zero copy mechanism will not be used. 00:16:44.830 [2024-10-15 09:16:02.690209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.089 [2024-10-15 09:16:02.810084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.349 [2024-10-15 09:16:03.026868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.349 [2024-10-15 09:16:03.027023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.608 BaseBdev1_malloc 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.608 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:16:45.608 [2024-10-15 09:16:03.448378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:45.608 [2024-10-15 09:16:03.448453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.608 [2024-10-15 09:16:03.448479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.608 [2024-10-15 09:16:03.448491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.608 [2024-10-15 09:16:03.450985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.609 [2024-10-15 09:16:03.451030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.609 BaseBdev1 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.609 BaseBdev2_malloc 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.609 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 [2024-10-15 09:16:03.505867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:45.869 [2024-10-15 09:16:03.506000] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.869 [2024-10-15 09:16:03.506029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:45.869 [2024-10-15 09:16:03.506042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.869 [2024-10-15 09:16:03.508456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.869 [2024-10-15 09:16:03.508502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:45.869 BaseBdev2 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 BaseBdev3_malloc 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 [2024-10-15 09:16:03.589830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:45.869 [2024-10-15 09:16:03.589894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.869 [2024-10-15 09:16:03.589921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:45.869 
[2024-10-15 09:16:03.589933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.869 [2024-10-15 09:16:03.592272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.869 [2024-10-15 09:16:03.592375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.869 BaseBdev3 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 BaseBdev4_malloc 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 [2024-10-15 09:16:03.647400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:45.869 [2024-10-15 09:16:03.647467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.869 [2024-10-15 09:16:03.647490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:45.869 [2024-10-15 09:16:03.647500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.869 [2024-10-15 09:16:03.649801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:45.869 [2024-10-15 09:16:03.649893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:45.869 BaseBdev4 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 spare_malloc 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 spare_delay 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 [2024-10-15 09:16:03.716723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.869 [2024-10-15 09:16:03.716828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.869 [2024-10-15 09:16:03.716855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:45.869 [2024-10-15 09:16:03.716865] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.869 [2024-10-15 09:16:03.719278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.869 [2024-10-15 09:16:03.719320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.869 spare 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 [2024-10-15 09:16:03.728769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.869 [2024-10-15 09:16:03.730828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.869 [2024-10-15 09:16:03.730961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.869 [2024-10-15 09:16:03.731031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:45.869 [2024-10-15 09:16:03.731138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:45.869 [2024-10-15 09:16:03.731153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:45.869 [2024-10-15 09:16:03.731455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:45.869 [2024-10-15 09:16:03.740198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:45.869 [2024-10-15 09:16:03.740255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:45.869 [2024-10-15 
09:16:03.740510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.869 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.870 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.870 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.870 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.870 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.870 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.870 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.870 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.130 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.130 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.130 "name": "raid_bdev1", 00:16:46.130 "uuid": 
"f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:46.130 "strip_size_kb": 64, 00:16:46.130 "state": "online", 00:16:46.130 "raid_level": "raid5f", 00:16:46.130 "superblock": false, 00:16:46.130 "num_base_bdevs": 4, 00:16:46.130 "num_base_bdevs_discovered": 4, 00:16:46.130 "num_base_bdevs_operational": 4, 00:16:46.130 "base_bdevs_list": [ 00:16:46.130 { 00:16:46.130 "name": "BaseBdev1", 00:16:46.130 "uuid": "780c6aa0-e207-5b90-b2b4-ba24574c5c81", 00:16:46.130 "is_configured": true, 00:16:46.130 "data_offset": 0, 00:16:46.130 "data_size": 65536 00:16:46.130 }, 00:16:46.130 { 00:16:46.130 "name": "BaseBdev2", 00:16:46.130 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:46.130 "is_configured": true, 00:16:46.130 "data_offset": 0, 00:16:46.130 "data_size": 65536 00:16:46.130 }, 00:16:46.130 { 00:16:46.130 "name": "BaseBdev3", 00:16:46.130 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:46.130 "is_configured": true, 00:16:46.130 "data_offset": 0, 00:16:46.130 "data_size": 65536 00:16:46.130 }, 00:16:46.130 { 00:16:46.130 "name": "BaseBdev4", 00:16:46.130 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:46.130 "is_configured": true, 00:16:46.130 "data_offset": 0, 00:16:46.130 "data_size": 65536 00:16:46.130 } 00:16:46.130 ] 00:16:46.130 }' 00:16:46.130 09:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.130 09:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.389 [2024-10-15 09:16:04.225459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:46.389 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.649 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:46.649 [2024-10-15 09:16:04.492874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:46.649 /dev/nbd0 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:46.910 1+0 records in 00:16:46.910 1+0 records out 00:16:46.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401246 s, 10.2 MB/s 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.910 09:16:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:46.910 09:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:47.503 512+0 records in 00:16:47.503 512+0 records out 00:16:47.503 100663296 bytes (101 MB, 96 MiB) copied, 0.524544 s, 192 MB/s 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:47.503 [2024-10-15 09:16:05.351040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.503 [2024-10-15 09:16:05.370654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.503 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.762 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.762 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.762 "name": "raid_bdev1", 00:16:47.762 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:47.762 "strip_size_kb": 64, 00:16:47.762 "state": "online", 00:16:47.762 "raid_level": "raid5f", 00:16:47.762 "superblock": false, 00:16:47.762 "num_base_bdevs": 4, 00:16:47.762 "num_base_bdevs_discovered": 3, 00:16:47.762 "num_base_bdevs_operational": 3, 00:16:47.762 "base_bdevs_list": [ 00:16:47.762 { 00:16:47.762 "name": null, 00:16:47.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.762 "is_configured": false, 00:16:47.762 "data_offset": 0, 00:16:47.762 "data_size": 65536 00:16:47.762 }, 00:16:47.762 { 00:16:47.762 "name": "BaseBdev2", 00:16:47.762 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:47.762 "is_configured": true, 00:16:47.762 
"data_offset": 0, 00:16:47.762 "data_size": 65536 00:16:47.762 }, 00:16:47.762 { 00:16:47.762 "name": "BaseBdev3", 00:16:47.762 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:47.762 "is_configured": true, 00:16:47.762 "data_offset": 0, 00:16:47.762 "data_size": 65536 00:16:47.762 }, 00:16:47.762 { 00:16:47.762 "name": "BaseBdev4", 00:16:47.762 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:47.762 "is_configured": true, 00:16:47.762 "data_offset": 0, 00:16:47.762 "data_size": 65536 00:16:47.762 } 00:16:47.762 ] 00:16:47.762 }' 00:16:47.762 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.762 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.021 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.021 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.021 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.021 [2024-10-15 09:16:05.785949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.021 [2024-10-15 09:16:05.804854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:48.021 09:16:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.021 09:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:48.021 [2024-10-15 09:16:05.817007] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.960 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.960 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.960 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:48.960 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.960 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.961 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.961 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.961 09:16:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.961 09:16:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.961 09:16:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.220 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.220 "name": "raid_bdev1", 00:16:49.220 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:49.220 "strip_size_kb": 64, 00:16:49.220 "state": "online", 00:16:49.220 "raid_level": "raid5f", 00:16:49.220 "superblock": false, 00:16:49.220 "num_base_bdevs": 4, 00:16:49.220 "num_base_bdevs_discovered": 4, 00:16:49.220 "num_base_bdevs_operational": 4, 00:16:49.220 "process": { 00:16:49.220 "type": "rebuild", 00:16:49.220 "target": "spare", 00:16:49.220 "progress": { 00:16:49.220 "blocks": 17280, 00:16:49.220 "percent": 8 00:16:49.220 } 00:16:49.220 }, 00:16:49.220 "base_bdevs_list": [ 00:16:49.220 { 00:16:49.220 "name": "spare", 00:16:49.220 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:49.220 "is_configured": true, 00:16:49.220 "data_offset": 0, 00:16:49.220 "data_size": 65536 00:16:49.220 }, 00:16:49.220 { 00:16:49.220 "name": "BaseBdev2", 00:16:49.220 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:49.220 "is_configured": true, 00:16:49.220 "data_offset": 0, 00:16:49.220 "data_size": 65536 00:16:49.220 }, 00:16:49.220 { 00:16:49.220 "name": "BaseBdev3", 00:16:49.220 "uuid": 
"e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:49.220 "is_configured": true, 00:16:49.220 "data_offset": 0, 00:16:49.220 "data_size": 65536 00:16:49.220 }, 00:16:49.220 { 00:16:49.220 "name": "BaseBdev4", 00:16:49.220 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:49.220 "is_configured": true, 00:16:49.220 "data_offset": 0, 00:16:49.220 "data_size": 65536 00:16:49.220 } 00:16:49.220 ] 00:16:49.220 }' 00:16:49.220 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.220 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.220 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.220 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.220 09:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:49.220 09:16:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.220 09:16:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.220 [2024-10-15 09:16:06.932911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.220 [2024-10-15 09:16:07.027099] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:49.220 [2024-10-15 09:16:07.027302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.220 [2024-10-15 09:16:07.027353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.220 [2024-10-15 09:16:07.027383] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:49.220 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.220 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.220 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.220 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.220 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.220 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.220 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.220 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.221 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.221 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.221 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.221 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.221 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.221 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.221 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.221 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.480 "name": "raid_bdev1", 00:16:49.480 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:49.480 "strip_size_kb": 64, 00:16:49.480 "state": "online", 00:16:49.480 "raid_level": "raid5f", 00:16:49.480 "superblock": false, 00:16:49.480 "num_base_bdevs": 4, 00:16:49.480 "num_base_bdevs_discovered": 3, 00:16:49.480 
"num_base_bdevs_operational": 3, 00:16:49.480 "base_bdevs_list": [ 00:16:49.480 { 00:16:49.480 "name": null, 00:16:49.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.480 "is_configured": false, 00:16:49.480 "data_offset": 0, 00:16:49.480 "data_size": 65536 00:16:49.480 }, 00:16:49.480 { 00:16:49.480 "name": "BaseBdev2", 00:16:49.480 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:49.480 "is_configured": true, 00:16:49.480 "data_offset": 0, 00:16:49.480 "data_size": 65536 00:16:49.480 }, 00:16:49.480 { 00:16:49.480 "name": "BaseBdev3", 00:16:49.480 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:49.480 "is_configured": true, 00:16:49.480 "data_offset": 0, 00:16:49.480 "data_size": 65536 00:16:49.480 }, 00:16:49.480 { 00:16:49.480 "name": "BaseBdev4", 00:16:49.480 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:49.480 "is_configured": true, 00:16:49.480 "data_offset": 0, 00:16:49.480 "data_size": 65536 00:16:49.480 } 00:16:49.480 ] 00:16:49.480 }' 00:16:49.480 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.480 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.738 09:16:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.738 "name": "raid_bdev1", 00:16:49.738 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:49.738 "strip_size_kb": 64, 00:16:49.738 "state": "online", 00:16:49.738 "raid_level": "raid5f", 00:16:49.738 "superblock": false, 00:16:49.738 "num_base_bdevs": 4, 00:16:49.738 "num_base_bdevs_discovered": 3, 00:16:49.738 "num_base_bdevs_operational": 3, 00:16:49.738 "base_bdevs_list": [ 00:16:49.738 { 00:16:49.738 "name": null, 00:16:49.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.738 "is_configured": false, 00:16:49.738 "data_offset": 0, 00:16:49.738 "data_size": 65536 00:16:49.738 }, 00:16:49.738 { 00:16:49.738 "name": "BaseBdev2", 00:16:49.738 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:49.738 "is_configured": true, 00:16:49.738 "data_offset": 0, 00:16:49.738 "data_size": 65536 00:16:49.738 }, 00:16:49.738 { 00:16:49.738 "name": "BaseBdev3", 00:16:49.738 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:49.738 "is_configured": true, 00:16:49.738 "data_offset": 0, 00:16:49.738 "data_size": 65536 00:16:49.738 }, 00:16:49.738 { 00:16:49.738 "name": "BaseBdev4", 00:16:49.738 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:49.738 "is_configured": true, 00:16:49.738 "data_offset": 0, 00:16:49.738 "data_size": 65536 00:16:49.738 } 00:16:49.738 ] 00:16:49.738 }' 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.738 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.738 [2024-10-15 09:16:07.629405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:49.997 [2024-10-15 09:16:07.647127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:49.997 09:16:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.997 09:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:49.997 [2024-10-15 09:16:07.657501] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.930 
09:16:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.930 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.930 "name": "raid_bdev1", 00:16:50.930 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:50.930 "strip_size_kb": 64, 00:16:50.930 "state": "online", 00:16:50.930 "raid_level": "raid5f", 00:16:50.930 "superblock": false, 00:16:50.930 "num_base_bdevs": 4, 00:16:50.930 "num_base_bdevs_discovered": 4, 00:16:50.930 "num_base_bdevs_operational": 4, 00:16:50.930 "process": { 00:16:50.930 "type": "rebuild", 00:16:50.930 "target": "spare", 00:16:50.930 "progress": { 00:16:50.930 "blocks": 19200, 00:16:50.930 "percent": 9 00:16:50.930 } 00:16:50.930 }, 00:16:50.930 "base_bdevs_list": [ 00:16:50.930 { 00:16:50.930 "name": "spare", 00:16:50.930 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:50.930 "is_configured": true, 00:16:50.930 "data_offset": 0, 00:16:50.930 "data_size": 65536 00:16:50.930 }, 00:16:50.930 { 00:16:50.930 "name": "BaseBdev2", 00:16:50.930 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:50.930 "is_configured": true, 00:16:50.930 "data_offset": 0, 00:16:50.930 "data_size": 65536 00:16:50.930 }, 00:16:50.930 { 00:16:50.930 "name": "BaseBdev3", 00:16:50.930 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:50.930 "is_configured": true, 00:16:50.931 "data_offset": 0, 00:16:50.931 "data_size": 65536 00:16:50.931 }, 00:16:50.931 { 00:16:50.931 "name": "BaseBdev4", 00:16:50.931 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:50.931 "is_configured": true, 00:16:50.931 "data_offset": 0, 00:16:50.931 "data_size": 65536 00:16:50.931 } 00:16:50.931 ] 00:16:50.931 }' 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=652 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.931 09:16:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.189 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:51.189 "name": "raid_bdev1", 00:16:51.189 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:51.189 "strip_size_kb": 64, 00:16:51.189 "state": "online", 00:16:51.189 "raid_level": "raid5f", 00:16:51.189 "superblock": false, 00:16:51.189 "num_base_bdevs": 4, 00:16:51.189 "num_base_bdevs_discovered": 4, 00:16:51.189 "num_base_bdevs_operational": 4, 00:16:51.189 "process": { 00:16:51.189 "type": "rebuild", 00:16:51.189 "target": "spare", 00:16:51.189 "progress": { 00:16:51.189 "blocks": 21120, 00:16:51.189 "percent": 10 00:16:51.189 } 00:16:51.189 }, 00:16:51.189 "base_bdevs_list": [ 00:16:51.189 { 00:16:51.189 "name": "spare", 00:16:51.189 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:51.189 "is_configured": true, 00:16:51.189 "data_offset": 0, 00:16:51.189 "data_size": 65536 00:16:51.189 }, 00:16:51.189 { 00:16:51.189 "name": "BaseBdev2", 00:16:51.189 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:51.189 "is_configured": true, 00:16:51.189 "data_offset": 0, 00:16:51.189 "data_size": 65536 00:16:51.189 }, 00:16:51.189 { 00:16:51.189 "name": "BaseBdev3", 00:16:51.189 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:51.189 "is_configured": true, 00:16:51.189 "data_offset": 0, 00:16:51.189 "data_size": 65536 00:16:51.189 }, 00:16:51.189 { 00:16:51.189 "name": "BaseBdev4", 00:16:51.189 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:51.189 "is_configured": true, 00:16:51.189 "data_offset": 0, 00:16:51.189 "data_size": 65536 00:16:51.189 } 00:16:51.189 ] 00:16:51.189 }' 00:16:51.189 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.189 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.189 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.189 09:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.189 09:16:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.122 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.122 "name": "raid_bdev1", 00:16:52.122 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:52.122 "strip_size_kb": 64, 00:16:52.122 "state": "online", 00:16:52.122 "raid_level": "raid5f", 00:16:52.122 "superblock": false, 00:16:52.122 "num_base_bdevs": 4, 00:16:52.122 "num_base_bdevs_discovered": 4, 00:16:52.122 "num_base_bdevs_operational": 4, 00:16:52.122 "process": { 00:16:52.122 "type": "rebuild", 00:16:52.122 "target": "spare", 00:16:52.122 "progress": { 00:16:52.122 "blocks": 42240, 00:16:52.122 "percent": 21 00:16:52.122 } 00:16:52.122 }, 00:16:52.122 "base_bdevs_list": [ 00:16:52.122 { 
00:16:52.122 "name": "spare", 00:16:52.122 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:52.122 "is_configured": true, 00:16:52.122 "data_offset": 0, 00:16:52.122 "data_size": 65536 00:16:52.122 }, 00:16:52.122 { 00:16:52.122 "name": "BaseBdev2", 00:16:52.122 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:52.122 "is_configured": true, 00:16:52.122 "data_offset": 0, 00:16:52.122 "data_size": 65536 00:16:52.122 }, 00:16:52.122 { 00:16:52.122 "name": "BaseBdev3", 00:16:52.122 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:52.122 "is_configured": true, 00:16:52.122 "data_offset": 0, 00:16:52.122 "data_size": 65536 00:16:52.122 }, 00:16:52.122 { 00:16:52.122 "name": "BaseBdev4", 00:16:52.122 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:52.123 "is_configured": true, 00:16:52.123 "data_offset": 0, 00:16:52.123 "data_size": 65536 00:16:52.123 } 00:16:52.123 ] 00:16:52.123 }' 00:16:52.123 09:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.381 09:16:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.381 09:16:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.381 09:16:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.381 09:16:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.316 "name": "raid_bdev1", 00:16:53.316 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:53.316 "strip_size_kb": 64, 00:16:53.316 "state": "online", 00:16:53.316 "raid_level": "raid5f", 00:16:53.316 "superblock": false, 00:16:53.316 "num_base_bdevs": 4, 00:16:53.316 "num_base_bdevs_discovered": 4, 00:16:53.316 "num_base_bdevs_operational": 4, 00:16:53.316 "process": { 00:16:53.316 "type": "rebuild", 00:16:53.316 "target": "spare", 00:16:53.316 "progress": { 00:16:53.316 "blocks": 65280, 00:16:53.316 "percent": 33 00:16:53.316 } 00:16:53.316 }, 00:16:53.316 "base_bdevs_list": [ 00:16:53.316 { 00:16:53.316 "name": "spare", 00:16:53.316 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:53.316 "is_configured": true, 00:16:53.316 "data_offset": 0, 00:16:53.316 "data_size": 65536 00:16:53.316 }, 00:16:53.316 { 00:16:53.316 "name": "BaseBdev2", 00:16:53.316 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:53.316 "is_configured": true, 00:16:53.316 "data_offset": 0, 00:16:53.316 "data_size": 65536 00:16:53.316 }, 00:16:53.316 { 00:16:53.316 "name": "BaseBdev3", 00:16:53.316 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:53.316 "is_configured": true, 00:16:53.316 "data_offset": 0, 00:16:53.316 
"data_size": 65536 00:16:53.316 }, 00:16:53.316 { 00:16:53.316 "name": "BaseBdev4", 00:16:53.316 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:53.316 "is_configured": true, 00:16:53.316 "data_offset": 0, 00:16:53.316 "data_size": 65536 00:16:53.316 } 00:16:53.316 ] 00:16:53.316 }' 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.316 09:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.697 "name": "raid_bdev1", 00:16:54.697 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:54.697 "strip_size_kb": 64, 00:16:54.697 "state": "online", 00:16:54.697 "raid_level": "raid5f", 00:16:54.697 "superblock": false, 00:16:54.697 "num_base_bdevs": 4, 00:16:54.697 "num_base_bdevs_discovered": 4, 00:16:54.697 "num_base_bdevs_operational": 4, 00:16:54.697 "process": { 00:16:54.697 "type": "rebuild", 00:16:54.697 "target": "spare", 00:16:54.697 "progress": { 00:16:54.697 "blocks": 86400, 00:16:54.697 "percent": 43 00:16:54.697 } 00:16:54.697 }, 00:16:54.697 "base_bdevs_list": [ 00:16:54.697 { 00:16:54.697 "name": "spare", 00:16:54.697 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:54.697 "is_configured": true, 00:16:54.697 "data_offset": 0, 00:16:54.697 "data_size": 65536 00:16:54.697 }, 00:16:54.697 { 00:16:54.697 "name": "BaseBdev2", 00:16:54.697 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:54.697 "is_configured": true, 00:16:54.697 "data_offset": 0, 00:16:54.697 "data_size": 65536 00:16:54.697 }, 00:16:54.697 { 00:16:54.697 "name": "BaseBdev3", 00:16:54.697 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:54.697 "is_configured": true, 00:16:54.697 "data_offset": 0, 00:16:54.697 "data_size": 65536 00:16:54.697 }, 00:16:54.697 { 00:16:54.697 "name": "BaseBdev4", 00:16:54.697 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:54.697 "is_configured": true, 00:16:54.697 "data_offset": 0, 00:16:54.697 "data_size": 65536 00:16:54.697 } 00:16:54.697 ] 00:16:54.697 }' 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.697 09:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.638 "name": "raid_bdev1", 00:16:55.638 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:55.638 "strip_size_kb": 64, 00:16:55.638 "state": "online", 00:16:55.638 "raid_level": "raid5f", 00:16:55.638 "superblock": false, 00:16:55.638 "num_base_bdevs": 4, 00:16:55.638 "num_base_bdevs_discovered": 4, 00:16:55.638 "num_base_bdevs_operational": 4, 00:16:55.638 "process": { 00:16:55.638 "type": "rebuild", 00:16:55.638 "target": "spare", 00:16:55.638 
"progress": { 00:16:55.638 "blocks": 107520, 00:16:55.638 "percent": 54 00:16:55.638 } 00:16:55.638 }, 00:16:55.638 "base_bdevs_list": [ 00:16:55.638 { 00:16:55.638 "name": "spare", 00:16:55.638 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:55.638 "is_configured": true, 00:16:55.638 "data_offset": 0, 00:16:55.638 "data_size": 65536 00:16:55.638 }, 00:16:55.638 { 00:16:55.638 "name": "BaseBdev2", 00:16:55.638 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:55.638 "is_configured": true, 00:16:55.638 "data_offset": 0, 00:16:55.638 "data_size": 65536 00:16:55.638 }, 00:16:55.638 { 00:16:55.638 "name": "BaseBdev3", 00:16:55.638 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:55.638 "is_configured": true, 00:16:55.638 "data_offset": 0, 00:16:55.638 "data_size": 65536 00:16:55.638 }, 00:16:55.638 { 00:16:55.638 "name": "BaseBdev4", 00:16:55.638 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:55.638 "is_configured": true, 00:16:55.638 "data_offset": 0, 00:16:55.638 "data_size": 65536 00:16:55.638 } 00:16:55.638 ] 00:16:55.638 }' 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.638 09:16:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.019 09:16:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.019 "name": "raid_bdev1", 00:16:57.019 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:57.019 "strip_size_kb": 64, 00:16:57.019 "state": "online", 00:16:57.019 "raid_level": "raid5f", 00:16:57.019 "superblock": false, 00:16:57.019 "num_base_bdevs": 4, 00:16:57.019 "num_base_bdevs_discovered": 4, 00:16:57.019 "num_base_bdevs_operational": 4, 00:16:57.019 "process": { 00:16:57.019 "type": "rebuild", 00:16:57.019 "target": "spare", 00:16:57.019 "progress": { 00:16:57.019 "blocks": 128640, 00:16:57.019 "percent": 65 00:16:57.019 } 00:16:57.019 }, 00:16:57.019 "base_bdevs_list": [ 00:16:57.019 { 00:16:57.019 "name": "spare", 00:16:57.019 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:57.019 "is_configured": true, 00:16:57.019 "data_offset": 0, 00:16:57.019 "data_size": 65536 00:16:57.019 }, 00:16:57.019 { 00:16:57.019 "name": "BaseBdev2", 00:16:57.019 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:57.019 "is_configured": true, 00:16:57.019 "data_offset": 0, 00:16:57.019 "data_size": 65536 00:16:57.019 }, 00:16:57.019 { 
00:16:57.019 "name": "BaseBdev3", 00:16:57.019 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:57.019 "is_configured": true, 00:16:57.019 "data_offset": 0, 00:16:57.019 "data_size": 65536 00:16:57.019 }, 00:16:57.019 { 00:16:57.019 "name": "BaseBdev4", 00:16:57.019 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:57.019 "is_configured": true, 00:16:57.019 "data_offset": 0, 00:16:57.019 "data_size": 65536 00:16:57.019 } 00:16:57.019 ] 00:16:57.019 }' 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.019 09:16:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.957 "name": "raid_bdev1", 00:16:57.957 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:57.957 "strip_size_kb": 64, 00:16:57.957 "state": "online", 00:16:57.957 "raid_level": "raid5f", 00:16:57.957 "superblock": false, 00:16:57.957 "num_base_bdevs": 4, 00:16:57.957 "num_base_bdevs_discovered": 4, 00:16:57.957 "num_base_bdevs_operational": 4, 00:16:57.957 "process": { 00:16:57.957 "type": "rebuild", 00:16:57.957 "target": "spare", 00:16:57.957 "progress": { 00:16:57.957 "blocks": 151680, 00:16:57.957 "percent": 77 00:16:57.957 } 00:16:57.957 }, 00:16:57.957 "base_bdevs_list": [ 00:16:57.957 { 00:16:57.957 "name": "spare", 00:16:57.957 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:57.957 "is_configured": true, 00:16:57.957 "data_offset": 0, 00:16:57.957 "data_size": 65536 00:16:57.957 }, 00:16:57.957 { 00:16:57.957 "name": "BaseBdev2", 00:16:57.957 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:57.957 "is_configured": true, 00:16:57.957 "data_offset": 0, 00:16:57.957 "data_size": 65536 00:16:57.957 }, 00:16:57.957 { 00:16:57.957 "name": "BaseBdev3", 00:16:57.957 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:57.957 "is_configured": true, 00:16:57.957 "data_offset": 0, 00:16:57.957 "data_size": 65536 00:16:57.957 }, 00:16:57.957 { 00:16:57.957 "name": "BaseBdev4", 00:16:57.957 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:57.957 "is_configured": true, 00:16:57.957 "data_offset": 0, 00:16:57.957 "data_size": 65536 00:16:57.957 } 00:16:57.957 ] 00:16:57.957 }' 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.957 09:16:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.957 09:16:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.895 09:16:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.155 09:16:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.155 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.155 "name": "raid_bdev1", 00:16:59.155 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:16:59.155 "strip_size_kb": 64, 00:16:59.155 "state": "online", 00:16:59.155 "raid_level": "raid5f", 00:16:59.155 "superblock": false, 00:16:59.155 "num_base_bdevs": 4, 00:16:59.155 
"num_base_bdevs_discovered": 4, 00:16:59.155 "num_base_bdevs_operational": 4, 00:16:59.155 "process": { 00:16:59.155 "type": "rebuild", 00:16:59.155 "target": "spare", 00:16:59.155 "progress": { 00:16:59.155 "blocks": 172800, 00:16:59.155 "percent": 87 00:16:59.155 } 00:16:59.155 }, 00:16:59.155 "base_bdevs_list": [ 00:16:59.155 { 00:16:59.155 "name": "spare", 00:16:59.155 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:16:59.155 "is_configured": true, 00:16:59.155 "data_offset": 0, 00:16:59.155 "data_size": 65536 00:16:59.155 }, 00:16:59.155 { 00:16:59.155 "name": "BaseBdev2", 00:16:59.155 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:16:59.155 "is_configured": true, 00:16:59.155 "data_offset": 0, 00:16:59.155 "data_size": 65536 00:16:59.155 }, 00:16:59.155 { 00:16:59.155 "name": "BaseBdev3", 00:16:59.155 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:16:59.155 "is_configured": true, 00:16:59.155 "data_offset": 0, 00:16:59.155 "data_size": 65536 00:16:59.155 }, 00:16:59.155 { 00:16:59.155 "name": "BaseBdev4", 00:16:59.155 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:16:59.155 "is_configured": true, 00:16:59.155 "data_offset": 0, 00:16:59.155 "data_size": 65536 00:16:59.155 } 00:16:59.155 ] 00:16:59.155 }' 00:16:59.155 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.155 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.155 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.155 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.155 09:16:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.093 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.093 "name": "raid_bdev1", 00:17:00.093 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:17:00.093 "strip_size_kb": 64, 00:17:00.093 "state": "online", 00:17:00.093 "raid_level": "raid5f", 00:17:00.093 "superblock": false, 00:17:00.093 "num_base_bdevs": 4, 00:17:00.093 "num_base_bdevs_discovered": 4, 00:17:00.093 "num_base_bdevs_operational": 4, 00:17:00.093 "process": { 00:17:00.093 "type": "rebuild", 00:17:00.093 "target": "spare", 00:17:00.093 "progress": { 00:17:00.093 "blocks": 193920, 00:17:00.093 "percent": 98 00:17:00.093 } 00:17:00.093 }, 00:17:00.093 "base_bdevs_list": [ 00:17:00.093 { 00:17:00.093 "name": "spare", 00:17:00.093 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:17:00.093 "is_configured": true, 00:17:00.093 "data_offset": 0, 00:17:00.093 "data_size": 65536 00:17:00.093 }, 00:17:00.093 { 00:17:00.093 "name": "BaseBdev2", 00:17:00.093 "uuid": 
"397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:17:00.094 "is_configured": true, 00:17:00.094 "data_offset": 0, 00:17:00.094 "data_size": 65536 00:17:00.094 }, 00:17:00.094 { 00:17:00.094 "name": "BaseBdev3", 00:17:00.094 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:17:00.094 "is_configured": true, 00:17:00.094 "data_offset": 0, 00:17:00.094 "data_size": 65536 00:17:00.094 }, 00:17:00.094 { 00:17:00.094 "name": "BaseBdev4", 00:17:00.094 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:17:00.094 "is_configured": true, 00:17:00.094 "data_offset": 0, 00:17:00.094 "data_size": 65536 00:17:00.094 } 00:17:00.094 ] 00:17:00.094 }' 00:17:00.094 09:16:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.352 09:16:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.352 09:16:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.352 [2024-10-15 09:16:18.033294] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:00.352 [2024-10-15 09:16:18.033373] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:00.352 [2024-10-15 09:16:18.033419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.352 09:16:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.352 09:16:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.289 "name": "raid_bdev1", 00:17:01.289 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:17:01.289 "strip_size_kb": 64, 00:17:01.289 "state": "online", 00:17:01.289 "raid_level": "raid5f", 00:17:01.289 "superblock": false, 00:17:01.289 "num_base_bdevs": 4, 00:17:01.289 "num_base_bdevs_discovered": 4, 00:17:01.289 "num_base_bdevs_operational": 4, 00:17:01.289 "base_bdevs_list": [ 00:17:01.289 { 00:17:01.289 "name": "spare", 00:17:01.289 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:17:01.289 "is_configured": true, 00:17:01.289 "data_offset": 0, 00:17:01.289 "data_size": 65536 00:17:01.289 }, 00:17:01.289 { 00:17:01.289 "name": "BaseBdev2", 00:17:01.289 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:17:01.289 "is_configured": true, 00:17:01.289 "data_offset": 0, 00:17:01.289 "data_size": 65536 00:17:01.289 }, 00:17:01.289 { 00:17:01.289 "name": "BaseBdev3", 00:17:01.289 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:17:01.289 "is_configured": true, 00:17:01.289 "data_offset": 0, 00:17:01.289 "data_size": 65536 00:17:01.289 }, 00:17:01.289 { 00:17:01.289 "name": "BaseBdev4", 00:17:01.289 
"uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:17:01.289 "is_configured": true, 00:17:01.289 "data_offset": 0, 00:17:01.289 "data_size": 65536 00:17:01.289 } 00:17:01.289 ] 00:17:01.289 }' 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:01.289 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.549 "name": "raid_bdev1", 00:17:01.549 "uuid": 
"f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:17:01.549 "strip_size_kb": 64, 00:17:01.549 "state": "online", 00:17:01.549 "raid_level": "raid5f", 00:17:01.549 "superblock": false, 00:17:01.549 "num_base_bdevs": 4, 00:17:01.549 "num_base_bdevs_discovered": 4, 00:17:01.549 "num_base_bdevs_operational": 4, 00:17:01.549 "base_bdevs_list": [ 00:17:01.549 { 00:17:01.549 "name": "spare", 00:17:01.549 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:17:01.549 "is_configured": true, 00:17:01.549 "data_offset": 0, 00:17:01.549 "data_size": 65536 00:17:01.549 }, 00:17:01.549 { 00:17:01.549 "name": "BaseBdev2", 00:17:01.549 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:17:01.549 "is_configured": true, 00:17:01.549 "data_offset": 0, 00:17:01.549 "data_size": 65536 00:17:01.549 }, 00:17:01.549 { 00:17:01.549 "name": "BaseBdev3", 00:17:01.549 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:17:01.549 "is_configured": true, 00:17:01.549 "data_offset": 0, 00:17:01.549 "data_size": 65536 00:17:01.549 }, 00:17:01.549 { 00:17:01.549 "name": "BaseBdev4", 00:17:01.549 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:17:01.549 "is_configured": true, 00:17:01.549 "data_offset": 0, 00:17:01.549 "data_size": 65536 00:17:01.549 } 00:17:01.549 ] 00:17:01.549 }' 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.549 09:16:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.549 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.549 "name": "raid_bdev1", 00:17:01.549 "uuid": "f3e8e767-54e1-495b-a1c4-8d4de6c20761", 00:17:01.549 "strip_size_kb": 64, 00:17:01.549 "state": "online", 00:17:01.549 "raid_level": "raid5f", 00:17:01.549 "superblock": false, 00:17:01.550 "num_base_bdevs": 4, 00:17:01.550 "num_base_bdevs_discovered": 4, 00:17:01.550 "num_base_bdevs_operational": 4, 00:17:01.550 "base_bdevs_list": [ 00:17:01.550 { 00:17:01.550 "name": "spare", 00:17:01.550 "uuid": "a3e80509-b0f3-592e-9d4c-63b4785529ff", 00:17:01.550 "is_configured": 
true, 00:17:01.550 "data_offset": 0, 00:17:01.550 "data_size": 65536 00:17:01.550 }, 00:17:01.550 { 00:17:01.550 "name": "BaseBdev2", 00:17:01.550 "uuid": "397d8c51-63dc-5330-bea7-cd61fa0b4d61", 00:17:01.550 "is_configured": true, 00:17:01.550 "data_offset": 0, 00:17:01.550 "data_size": 65536 00:17:01.550 }, 00:17:01.550 { 00:17:01.550 "name": "BaseBdev3", 00:17:01.550 "uuid": "e33b6668-0977-5fbb-827a-2c454ba3ceca", 00:17:01.550 "is_configured": true, 00:17:01.550 "data_offset": 0, 00:17:01.550 "data_size": 65536 00:17:01.550 }, 00:17:01.550 { 00:17:01.550 "name": "BaseBdev4", 00:17:01.550 "uuid": "cd983016-cbf0-56c2-a7ef-9424fba7b53e", 00:17:01.550 "is_configured": true, 00:17:01.550 "data_offset": 0, 00:17:01.550 "data_size": 65536 00:17:01.550 } 00:17:01.550 ] 00:17:01.550 }' 00:17:01.550 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.550 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.120 [2024-10-15 09:16:19.788381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.120 [2024-10-15 09:16:19.788420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.120 [2024-10-15 09:16:19.788516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.120 [2024-10-15 09:16:19.788621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.120 [2024-10-15 09:16:19.788633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.120 09:16:19 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.120 09:16:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:02.380 /dev/nbd0 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.380 1+0 records in 00:17:02.380 1+0 records out 00:17:02.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632384 s, 6.5 MB/s 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.380 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:02.665 /dev/nbd1 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.665 1+0 records in 00:17:02.665 1+0 records out 00:17:02.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302813 s, 13.5 MB/s 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.665 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.925 09:16:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84904 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84904 ']' 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84904 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84904 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:03.184 killing process with pid 84904 00:17:03.184 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84904' 00:17:03.184 Received shutdown signal, test time was about 60.000000 seconds 00:17:03.184 00:17:03.184 Latency(us) 00:17:03.184 [2024-10-15T09:16:21.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.184 [2024-10-15T09:16:21.081Z] =================================================================================================================== 00:17:03.185 [2024-10-15T09:16:21.081Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:03.185 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84904 00:17:03.185 09:16:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84904 00:17:03.185 [2024-10-15 09:16:21.080997] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.754 [2024-10-15 09:16:21.607178] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:05.134 00:17:05.134 real 0m20.349s 00:17:05.134 user 0m24.233s 00:17:05.134 sys 0m2.311s 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.134 ************************************ 00:17:05.134 END TEST raid5f_rebuild_test 00:17:05.134 ************************************ 00:17:05.134 09:16:22 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:05.134 09:16:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:05.134 09:16:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.134 09:16:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.134 ************************************ 00:17:05.134 START TEST raid5f_rebuild_test_sb 00:17:05.134 ************************************ 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:05.134 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85426 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85426 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85426 ']' 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.135 09:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.135 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:05.135 Zero copy mechanism will not be used. 00:17:05.135 [2024-10-15 09:16:22.947905] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:17:05.135 [2024-10-15 09:16:22.948023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85426 ] 00:17:05.394 [2024-10-15 09:16:23.113100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.394 [2024-10-15 09:16:23.240370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.654 [2024-10-15 09:16:23.443093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.654 [2024-10-15 09:16:23.443131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.913 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.913 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:05.913 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.913 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:05.913 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.913 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 BaseBdev1_malloc 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 [2024-10-15 09:16:23.854481] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:06.174 [2024-10-15 09:16:23.854556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.174 [2024-10-15 09:16:23.854585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.174 [2024-10-15 09:16:23.854598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.174 [2024-10-15 09:16:23.856966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.174 [2024-10-15 09:16:23.857007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.174 BaseBdev1 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 BaseBdev2_malloc 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 [2024-10-15 09:16:23.910330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:06.174 [2024-10-15 09:16:23.910399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:06.174 [2024-10-15 09:16:23.910423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.174 [2024-10-15 09:16:23.910435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.174 [2024-10-15 09:16:23.912664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.174 [2024-10-15 09:16:23.912733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.174 BaseBdev2 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 BaseBdev3_malloc 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 [2024-10-15 09:16:23.979921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:06.174 [2024-10-15 09:16:23.979987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.174 [2024-10-15 09:16:23.980014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.174 [2024-10-15 
09:16:23.980026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.174 [2024-10-15 09:16:23.982437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.174 [2024-10-15 09:16:23.982486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:06.174 BaseBdev3 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.174 09:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 BaseBdev4_malloc 00:17:06.174 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.174 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:06.174 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.174 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 [2024-10-15 09:16:24.037574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:06.174 [2024-10-15 09:16:24.037675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.174 [2024-10-15 09:16:24.037720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:06.174 [2024-10-15 09:16:24.037733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.174 [2024-10-15 09:16:24.040144] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:06.174 [2024-10-15 09:16:24.040187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:06.174 BaseBdev4 00:17:06.174 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.174 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:06.174 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.174 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.434 spare_malloc 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.434 spare_delay 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.434 [2024-10-15 09:16:24.106125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.434 [2024-10-15 09:16:24.106194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.434 [2024-10-15 09:16:24.106220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:06.434 [2024-10-15 09:16:24.106232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.434 [2024-10-15 09:16:24.108629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.434 [2024-10-15 09:16:24.108694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.434 spare 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.434 [2024-10-15 09:16:24.118178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.434 [2024-10-15 09:16:24.120249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.434 [2024-10-15 09:16:24.120322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.434 [2024-10-15 09:16:24.120374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:06.434 [2024-10-15 09:16:24.120604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:06.434 [2024-10-15 09:16:24.120634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:06.434 [2024-10-15 09:16:24.120946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:06.434 [2024-10-15 09:16:24.129421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:06.434 [2024-10-15 09:16:24.129448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:06.434 [2024-10-15 09:16:24.129729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.434 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.434 09:16:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.434 "name": "raid_bdev1", 00:17:06.434 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:06.434 "strip_size_kb": 64, 00:17:06.434 "state": "online", 00:17:06.434 "raid_level": "raid5f", 00:17:06.434 "superblock": true, 00:17:06.434 "num_base_bdevs": 4, 00:17:06.434 "num_base_bdevs_discovered": 4, 00:17:06.434 "num_base_bdevs_operational": 4, 00:17:06.434 "base_bdevs_list": [ 00:17:06.434 { 00:17:06.434 "name": "BaseBdev1", 00:17:06.434 "uuid": "44748df8-82e8-5d24-be69-e77a4c018ad9", 00:17:06.434 "is_configured": true, 00:17:06.434 "data_offset": 2048, 00:17:06.434 "data_size": 63488 00:17:06.434 }, 00:17:06.434 { 00:17:06.434 "name": "BaseBdev2", 00:17:06.434 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:06.434 "is_configured": true, 00:17:06.434 "data_offset": 2048, 00:17:06.434 "data_size": 63488 00:17:06.434 }, 00:17:06.434 { 00:17:06.434 "name": "BaseBdev3", 00:17:06.434 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:06.434 "is_configured": true, 00:17:06.434 "data_offset": 2048, 00:17:06.434 "data_size": 63488 00:17:06.434 }, 00:17:06.434 { 00:17:06.434 "name": "BaseBdev4", 00:17:06.434 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:06.434 "is_configured": true, 00:17:06.434 "data_offset": 2048, 00:17:06.435 "data_size": 63488 00:17:06.435 } 00:17:06.435 ] 00:17:06.435 }' 00:17:06.435 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.435 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.694 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:06.694 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.694 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.694 09:16:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 [2024-10-15 09:16:24.594715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:06.954 09:16:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:06.954 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:07.214 [2024-10-15 09:16:24.906030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:07.214 /dev/nbd0 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.214 1+0 records in 00:17:07.214 
1+0 records out 00:17:07.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357934 s, 11.4 MB/s 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:07.214 09:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:07.784 496+0 records in 00:17:07.784 496+0 records out 00:17:07.784 97517568 bytes (98 MB, 93 MiB) copied, 0.505471 s, 193 MB/s 00:17:07.784 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:07.784 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.784 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:07.784 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.784 09:16:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:07.784 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.784 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.044 [2024-10-15 09:16:25.735039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.044 [2024-10-15 09:16:25.754272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.044 09:16:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.044 "name": "raid_bdev1", 00:17:08.044 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:08.044 "strip_size_kb": 64, 00:17:08.044 "state": "online", 00:17:08.044 "raid_level": "raid5f", 00:17:08.044 "superblock": true, 00:17:08.044 "num_base_bdevs": 4, 00:17:08.044 "num_base_bdevs_discovered": 3, 00:17:08.044 "num_base_bdevs_operational": 3, 00:17:08.044 
"base_bdevs_list": [ 00:17:08.044 { 00:17:08.044 "name": null, 00:17:08.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.044 "is_configured": false, 00:17:08.044 "data_offset": 0, 00:17:08.044 "data_size": 63488 00:17:08.044 }, 00:17:08.044 { 00:17:08.044 "name": "BaseBdev2", 00:17:08.044 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:08.044 "is_configured": true, 00:17:08.044 "data_offset": 2048, 00:17:08.044 "data_size": 63488 00:17:08.044 }, 00:17:08.044 { 00:17:08.044 "name": "BaseBdev3", 00:17:08.044 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:08.044 "is_configured": true, 00:17:08.044 "data_offset": 2048, 00:17:08.044 "data_size": 63488 00:17:08.044 }, 00:17:08.044 { 00:17:08.044 "name": "BaseBdev4", 00:17:08.044 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:08.044 "is_configured": true, 00:17:08.044 "data_offset": 2048, 00:17:08.044 "data_size": 63488 00:17:08.044 } 00:17:08.044 ] 00:17:08.044 }' 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.044 09:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.615 09:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.615 09:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.615 09:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.615 [2024-10-15 09:16:26.233766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.615 [2024-10-15 09:16:26.251528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:08.615 09:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.615 09:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:08.615 [2024-10-15 09:16:26.262270] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.554 "name": "raid_bdev1", 00:17:09.554 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:09.554 "strip_size_kb": 64, 00:17:09.554 "state": "online", 00:17:09.554 "raid_level": "raid5f", 00:17:09.554 "superblock": true, 00:17:09.554 "num_base_bdevs": 4, 00:17:09.554 "num_base_bdevs_discovered": 4, 00:17:09.554 "num_base_bdevs_operational": 4, 00:17:09.554 "process": { 00:17:09.554 "type": "rebuild", 00:17:09.554 "target": "spare", 00:17:09.554 "progress": { 00:17:09.554 "blocks": 17280, 00:17:09.554 "percent": 9 00:17:09.554 } 00:17:09.554 }, 00:17:09.554 "base_bdevs_list": [ 00:17:09.554 { 00:17:09.554 "name": "spare", 00:17:09.554 "uuid": 
"bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:09.554 "is_configured": true, 00:17:09.554 "data_offset": 2048, 00:17:09.554 "data_size": 63488 00:17:09.554 }, 00:17:09.554 { 00:17:09.554 "name": "BaseBdev2", 00:17:09.554 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:09.554 "is_configured": true, 00:17:09.554 "data_offset": 2048, 00:17:09.554 "data_size": 63488 00:17:09.554 }, 00:17:09.554 { 00:17:09.554 "name": "BaseBdev3", 00:17:09.554 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:09.554 "is_configured": true, 00:17:09.554 "data_offset": 2048, 00:17:09.554 "data_size": 63488 00:17:09.554 }, 00:17:09.554 { 00:17:09.554 "name": "BaseBdev4", 00:17:09.554 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:09.554 "is_configured": true, 00:17:09.554 "data_offset": 2048, 00:17:09.554 "data_size": 63488 00:17:09.554 } 00:17:09.554 ] 00:17:09.554 }' 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.554 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 [2024-10-15 09:16:27.417818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.815 [2024-10-15 09:16:27.471920] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:09.815 [2024-10-15 09:16:27.472127] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.815 [2024-10-15 09:16:27.472179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.815 [2024-10-15 09:16:27.472241] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.815 "name": "raid_bdev1", 00:17:09.815 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:09.815 "strip_size_kb": 64, 00:17:09.815 "state": "online", 00:17:09.815 "raid_level": "raid5f", 00:17:09.815 "superblock": true, 00:17:09.815 "num_base_bdevs": 4, 00:17:09.815 "num_base_bdevs_discovered": 3, 00:17:09.815 "num_base_bdevs_operational": 3, 00:17:09.815 "base_bdevs_list": [ 00:17:09.815 { 00:17:09.815 "name": null, 00:17:09.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.815 "is_configured": false, 00:17:09.815 "data_offset": 0, 00:17:09.815 "data_size": 63488 00:17:09.815 }, 00:17:09.815 { 00:17:09.815 "name": "BaseBdev2", 00:17:09.815 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:09.815 "is_configured": true, 00:17:09.815 "data_offset": 2048, 00:17:09.815 "data_size": 63488 00:17:09.815 }, 00:17:09.815 { 00:17:09.815 "name": "BaseBdev3", 00:17:09.815 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:09.815 "is_configured": true, 00:17:09.815 "data_offset": 2048, 00:17:09.815 "data_size": 63488 00:17:09.815 }, 00:17:09.815 { 00:17:09.815 "name": "BaseBdev4", 00:17:09.815 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:09.815 "is_configured": true, 00:17:09.815 "data_offset": 2048, 00:17:09.815 "data_size": 63488 00:17:09.815 } 00:17:09.815 ] 00:17:09.815 }' 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.815 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.073 
09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.073 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.329 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.329 "name": "raid_bdev1", 00:17:10.329 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:10.329 "strip_size_kb": 64, 00:17:10.329 "state": "online", 00:17:10.329 "raid_level": "raid5f", 00:17:10.329 "superblock": true, 00:17:10.329 "num_base_bdevs": 4, 00:17:10.329 "num_base_bdevs_discovered": 3, 00:17:10.329 "num_base_bdevs_operational": 3, 00:17:10.329 "base_bdevs_list": [ 00:17:10.329 { 00:17:10.329 "name": null, 00:17:10.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.329 "is_configured": false, 00:17:10.329 "data_offset": 0, 00:17:10.329 "data_size": 63488 00:17:10.329 }, 00:17:10.329 { 00:17:10.329 "name": "BaseBdev2", 00:17:10.329 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:10.329 "is_configured": true, 00:17:10.329 "data_offset": 2048, 00:17:10.329 "data_size": 63488 00:17:10.329 }, 00:17:10.329 { 00:17:10.329 "name": "BaseBdev3", 00:17:10.329 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:10.329 "is_configured": true, 00:17:10.329 "data_offset": 2048, 00:17:10.329 
"data_size": 63488 00:17:10.329 }, 00:17:10.329 { 00:17:10.329 "name": "BaseBdev4", 00:17:10.329 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:10.329 "is_configured": true, 00:17:10.329 "data_offset": 2048, 00:17:10.329 "data_size": 63488 00:17:10.329 } 00:17:10.329 ] 00:17:10.329 }' 00:17:10.329 09:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.329 09:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.329 09:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.329 09:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.329 09:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:10.329 09:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.329 09:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.329 [2024-10-15 09:16:28.107803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.329 [2024-10-15 09:16:28.125700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:10.329 09:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.329 09:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:10.329 [2024-10-15 09:16:28.136972] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:11.264 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.265 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.524 "name": "raid_bdev1", 00:17:11.524 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:11.524 "strip_size_kb": 64, 00:17:11.524 "state": "online", 00:17:11.524 "raid_level": "raid5f", 00:17:11.524 "superblock": true, 00:17:11.524 "num_base_bdevs": 4, 00:17:11.524 "num_base_bdevs_discovered": 4, 00:17:11.524 "num_base_bdevs_operational": 4, 00:17:11.524 "process": { 00:17:11.524 "type": "rebuild", 00:17:11.524 "target": "spare", 00:17:11.524 "progress": { 00:17:11.524 "blocks": 17280, 00:17:11.524 "percent": 9 00:17:11.524 } 00:17:11.524 }, 00:17:11.524 "base_bdevs_list": [ 00:17:11.524 { 00:17:11.524 "name": "spare", 00:17:11.524 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:11.524 "is_configured": true, 00:17:11.524 "data_offset": 2048, 00:17:11.524 "data_size": 63488 00:17:11.524 }, 00:17:11.524 { 00:17:11.524 "name": "BaseBdev2", 00:17:11.524 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:11.524 "is_configured": true, 00:17:11.524 "data_offset": 2048, 00:17:11.524 "data_size": 63488 00:17:11.524 }, 00:17:11.524 { 
00:17:11.524 "name": "BaseBdev3", 00:17:11.524 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:11.524 "is_configured": true, 00:17:11.524 "data_offset": 2048, 00:17:11.524 "data_size": 63488 00:17:11.524 }, 00:17:11.524 { 00:17:11.524 "name": "BaseBdev4", 00:17:11.524 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:11.524 "is_configured": true, 00:17:11.524 "data_offset": 2048, 00:17:11.524 "data_size": 63488 00:17:11.524 } 00:17:11.524 ] 00:17:11.524 }' 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:11.524 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=673 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.524 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.524 "name": "raid_bdev1", 00:17:11.524 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:11.524 "strip_size_kb": 64, 00:17:11.524 "state": "online", 00:17:11.524 "raid_level": "raid5f", 00:17:11.524 "superblock": true, 00:17:11.524 "num_base_bdevs": 4, 00:17:11.524 "num_base_bdevs_discovered": 4, 00:17:11.524 "num_base_bdevs_operational": 4, 00:17:11.524 "process": { 00:17:11.524 "type": "rebuild", 00:17:11.524 "target": "spare", 00:17:11.525 "progress": { 00:17:11.525 "blocks": 21120, 00:17:11.525 "percent": 11 00:17:11.525 } 00:17:11.525 }, 00:17:11.525 "base_bdevs_list": [ 00:17:11.525 { 00:17:11.525 "name": "spare", 00:17:11.525 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:11.525 "is_configured": true, 00:17:11.525 "data_offset": 2048, 00:17:11.525 "data_size": 63488 00:17:11.525 }, 00:17:11.525 { 00:17:11.525 "name": "BaseBdev2", 00:17:11.525 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:11.525 "is_configured": true, 00:17:11.525 "data_offset": 2048, 00:17:11.525 "data_size": 63488 00:17:11.525 }, 00:17:11.525 { 
00:17:11.525 "name": "BaseBdev3", 00:17:11.525 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:11.525 "is_configured": true, 00:17:11.525 "data_offset": 2048, 00:17:11.525 "data_size": 63488 00:17:11.525 }, 00:17:11.525 { 00:17:11.525 "name": "BaseBdev4", 00:17:11.525 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:11.525 "is_configured": true, 00:17:11.525 "data_offset": 2048, 00:17:11.525 "data_size": 63488 00:17:11.525 } 00:17:11.525 ] 00:17:11.525 }' 00:17:11.525 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.525 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.525 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.525 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.525 09:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.904 09:16:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.904 "name": "raid_bdev1", 00:17:12.904 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:12.904 "strip_size_kb": 64, 00:17:12.904 "state": "online", 00:17:12.904 "raid_level": "raid5f", 00:17:12.904 "superblock": true, 00:17:12.904 "num_base_bdevs": 4, 00:17:12.904 "num_base_bdevs_discovered": 4, 00:17:12.904 "num_base_bdevs_operational": 4, 00:17:12.904 "process": { 00:17:12.904 "type": "rebuild", 00:17:12.904 "target": "spare", 00:17:12.904 "progress": { 00:17:12.904 "blocks": 42240, 00:17:12.904 "percent": 22 00:17:12.904 } 00:17:12.904 }, 00:17:12.904 "base_bdevs_list": [ 00:17:12.904 { 00:17:12.904 "name": "spare", 00:17:12.904 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:12.904 "is_configured": true, 00:17:12.904 "data_offset": 2048, 00:17:12.904 "data_size": 63488 00:17:12.904 }, 00:17:12.904 { 00:17:12.904 "name": "BaseBdev2", 00:17:12.904 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:12.904 "is_configured": true, 00:17:12.904 "data_offset": 2048, 00:17:12.904 "data_size": 63488 00:17:12.904 }, 00:17:12.904 { 00:17:12.904 "name": "BaseBdev3", 00:17:12.904 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:12.904 "is_configured": true, 00:17:12.904 "data_offset": 2048, 00:17:12.904 "data_size": 63488 00:17:12.904 }, 00:17:12.904 { 00:17:12.904 "name": "BaseBdev4", 00:17:12.904 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:12.904 "is_configured": true, 00:17:12.904 "data_offset": 2048, 00:17:12.904 "data_size": 63488 00:17:12.904 } 00:17:12.904 ] 00:17:12.904 }' 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.904 09:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.845 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.845 "name": "raid_bdev1", 00:17:13.845 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:13.845 "strip_size_kb": 64, 00:17:13.845 "state": 
"online", 00:17:13.845 "raid_level": "raid5f", 00:17:13.845 "superblock": true, 00:17:13.845 "num_base_bdevs": 4, 00:17:13.845 "num_base_bdevs_discovered": 4, 00:17:13.845 "num_base_bdevs_operational": 4, 00:17:13.845 "process": { 00:17:13.845 "type": "rebuild", 00:17:13.845 "target": "spare", 00:17:13.845 "progress": { 00:17:13.845 "blocks": 65280, 00:17:13.845 "percent": 34 00:17:13.845 } 00:17:13.845 }, 00:17:13.845 "base_bdevs_list": [ 00:17:13.845 { 00:17:13.845 "name": "spare", 00:17:13.845 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:13.845 "is_configured": true, 00:17:13.845 "data_offset": 2048, 00:17:13.845 "data_size": 63488 00:17:13.845 }, 00:17:13.845 { 00:17:13.845 "name": "BaseBdev2", 00:17:13.845 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:13.845 "is_configured": true, 00:17:13.845 "data_offset": 2048, 00:17:13.845 "data_size": 63488 00:17:13.845 }, 00:17:13.845 { 00:17:13.845 "name": "BaseBdev3", 00:17:13.845 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:13.845 "is_configured": true, 00:17:13.845 "data_offset": 2048, 00:17:13.845 "data_size": 63488 00:17:13.845 }, 00:17:13.845 { 00:17:13.845 "name": "BaseBdev4", 00:17:13.846 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:13.846 "is_configured": true, 00:17:13.846 "data_offset": 2048, 00:17:13.846 "data_size": 63488 00:17:13.846 } 00:17:13.846 ] 00:17:13.846 }' 00:17:13.846 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.846 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.846 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.846 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.846 09:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.783 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.783 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.783 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.783 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.783 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.783 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.042 "name": "raid_bdev1", 00:17:15.042 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:15.042 "strip_size_kb": 64, 00:17:15.042 "state": "online", 00:17:15.042 "raid_level": "raid5f", 00:17:15.042 "superblock": true, 00:17:15.042 "num_base_bdevs": 4, 00:17:15.042 "num_base_bdevs_discovered": 4, 00:17:15.042 "num_base_bdevs_operational": 4, 00:17:15.042 "process": { 00:17:15.042 "type": "rebuild", 00:17:15.042 "target": "spare", 00:17:15.042 "progress": { 00:17:15.042 "blocks": 86400, 00:17:15.042 "percent": 45 00:17:15.042 } 00:17:15.042 }, 00:17:15.042 "base_bdevs_list": [ 00:17:15.042 { 00:17:15.042 "name": "spare", 00:17:15.042 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 
00:17:15.042 "is_configured": true, 00:17:15.042 "data_offset": 2048, 00:17:15.042 "data_size": 63488 00:17:15.042 }, 00:17:15.042 { 00:17:15.042 "name": "BaseBdev2", 00:17:15.042 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:15.042 "is_configured": true, 00:17:15.042 "data_offset": 2048, 00:17:15.042 "data_size": 63488 00:17:15.042 }, 00:17:15.042 { 00:17:15.042 "name": "BaseBdev3", 00:17:15.042 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:15.042 "is_configured": true, 00:17:15.042 "data_offset": 2048, 00:17:15.042 "data_size": 63488 00:17:15.042 }, 00:17:15.042 { 00:17:15.042 "name": "BaseBdev4", 00:17:15.042 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:15.042 "is_configured": true, 00:17:15.042 "data_offset": 2048, 00:17:15.042 "data_size": 63488 00:17:15.042 } 00:17:15.042 ] 00:17:15.042 }' 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.042 09:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.013 09:16:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.013 "name": "raid_bdev1", 00:17:16.013 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:16.013 "strip_size_kb": 64, 00:17:16.013 "state": "online", 00:17:16.013 "raid_level": "raid5f", 00:17:16.013 "superblock": true, 00:17:16.013 "num_base_bdevs": 4, 00:17:16.013 "num_base_bdevs_discovered": 4, 00:17:16.013 "num_base_bdevs_operational": 4, 00:17:16.013 "process": { 00:17:16.013 "type": "rebuild", 00:17:16.013 "target": "spare", 00:17:16.013 "progress": { 00:17:16.013 "blocks": 107520, 00:17:16.013 "percent": 56 00:17:16.013 } 00:17:16.013 }, 00:17:16.013 "base_bdevs_list": [ 00:17:16.013 { 00:17:16.013 "name": "spare", 00:17:16.013 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:16.013 "is_configured": true, 00:17:16.013 "data_offset": 2048, 00:17:16.013 "data_size": 63488 00:17:16.013 }, 00:17:16.013 { 00:17:16.013 "name": "BaseBdev2", 00:17:16.013 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:16.013 "is_configured": true, 00:17:16.013 "data_offset": 2048, 00:17:16.013 "data_size": 63488 00:17:16.013 }, 00:17:16.013 { 00:17:16.013 "name": "BaseBdev3", 00:17:16.013 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:16.013 "is_configured": true, 00:17:16.013 "data_offset": 2048, 00:17:16.013 
"data_size": 63488 00:17:16.013 }, 00:17:16.013 { 00:17:16.013 "name": "BaseBdev4", 00:17:16.013 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:16.013 "is_configured": true, 00:17:16.013 "data_offset": 2048, 00:17:16.013 "data_size": 63488 00:17:16.013 } 00:17:16.013 ] 00:17:16.013 }' 00:17:16.013 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.273 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.273 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.273 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.273 09:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.213 09:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.213 
09:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.213 09:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.213 "name": "raid_bdev1", 00:17:17.213 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:17.213 "strip_size_kb": 64, 00:17:17.213 "state": "online", 00:17:17.213 "raid_level": "raid5f", 00:17:17.213 "superblock": true, 00:17:17.213 "num_base_bdevs": 4, 00:17:17.213 "num_base_bdevs_discovered": 4, 00:17:17.213 "num_base_bdevs_operational": 4, 00:17:17.213 "process": { 00:17:17.213 "type": "rebuild", 00:17:17.213 "target": "spare", 00:17:17.213 "progress": { 00:17:17.213 "blocks": 130560, 00:17:17.213 "percent": 68 00:17:17.213 } 00:17:17.213 }, 00:17:17.213 "base_bdevs_list": [ 00:17:17.213 { 00:17:17.213 "name": "spare", 00:17:17.213 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:17.213 "is_configured": true, 00:17:17.213 "data_offset": 2048, 00:17:17.213 "data_size": 63488 00:17:17.213 }, 00:17:17.213 { 00:17:17.213 "name": "BaseBdev2", 00:17:17.213 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:17.213 "is_configured": true, 00:17:17.213 "data_offset": 2048, 00:17:17.213 "data_size": 63488 00:17:17.213 }, 00:17:17.213 { 00:17:17.213 "name": "BaseBdev3", 00:17:17.213 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:17.213 "is_configured": true, 00:17:17.213 "data_offset": 2048, 00:17:17.213 "data_size": 63488 00:17:17.213 }, 00:17:17.213 { 00:17:17.213 "name": "BaseBdev4", 00:17:17.213 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:17.213 "is_configured": true, 00:17:17.213 "data_offset": 2048, 00:17:17.213 "data_size": 63488 00:17:17.213 } 00:17:17.213 ] 00:17:17.213 }' 00:17:17.213 09:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.213 09:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.213 09:16:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.475 09:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.475 09:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.434 "name": "raid_bdev1", 00:17:18.434 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:18.434 "strip_size_kb": 64, 00:17:18.434 "state": "online", 00:17:18.434 "raid_level": "raid5f", 00:17:18.434 "superblock": true, 00:17:18.434 "num_base_bdevs": 4, 00:17:18.434 "num_base_bdevs_discovered": 4, 00:17:18.434 "num_base_bdevs_operational": 
4, 00:17:18.434 "process": { 00:17:18.434 "type": "rebuild", 00:17:18.434 "target": "spare", 00:17:18.434 "progress": { 00:17:18.434 "blocks": 151680, 00:17:18.434 "percent": 79 00:17:18.434 } 00:17:18.434 }, 00:17:18.434 "base_bdevs_list": [ 00:17:18.434 { 00:17:18.434 "name": "spare", 00:17:18.434 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:18.434 "is_configured": true, 00:17:18.434 "data_offset": 2048, 00:17:18.434 "data_size": 63488 00:17:18.434 }, 00:17:18.434 { 00:17:18.434 "name": "BaseBdev2", 00:17:18.434 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:18.434 "is_configured": true, 00:17:18.434 "data_offset": 2048, 00:17:18.434 "data_size": 63488 00:17:18.434 }, 00:17:18.434 { 00:17:18.434 "name": "BaseBdev3", 00:17:18.434 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:18.434 "is_configured": true, 00:17:18.434 "data_offset": 2048, 00:17:18.434 "data_size": 63488 00:17:18.434 }, 00:17:18.434 { 00:17:18.434 "name": "BaseBdev4", 00:17:18.434 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:18.434 "is_configured": true, 00:17:18.434 "data_offset": 2048, 00:17:18.434 "data_size": 63488 00:17:18.434 } 00:17:18.434 ] 00:17:18.434 }' 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.434 09:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.409 
09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.409 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.673 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.673 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.673 "name": "raid_bdev1", 00:17:19.673 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:19.673 "strip_size_kb": 64, 00:17:19.673 "state": "online", 00:17:19.673 "raid_level": "raid5f", 00:17:19.673 "superblock": true, 00:17:19.673 "num_base_bdevs": 4, 00:17:19.673 "num_base_bdevs_discovered": 4, 00:17:19.673 "num_base_bdevs_operational": 4, 00:17:19.673 "process": { 00:17:19.673 "type": "rebuild", 00:17:19.673 "target": "spare", 00:17:19.673 "progress": { 00:17:19.673 "blocks": 174720, 00:17:19.673 "percent": 91 00:17:19.673 } 00:17:19.673 }, 00:17:19.673 "base_bdevs_list": [ 00:17:19.673 { 00:17:19.673 "name": "spare", 00:17:19.673 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:19.673 "is_configured": true, 00:17:19.673 "data_offset": 2048, 00:17:19.673 "data_size": 63488 00:17:19.673 }, 00:17:19.673 { 00:17:19.673 "name": "BaseBdev2", 00:17:19.673 "uuid": 
"2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:19.673 "is_configured": true, 00:17:19.673 "data_offset": 2048, 00:17:19.673 "data_size": 63488 00:17:19.673 }, 00:17:19.673 { 00:17:19.673 "name": "BaseBdev3", 00:17:19.673 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:19.673 "is_configured": true, 00:17:19.673 "data_offset": 2048, 00:17:19.673 "data_size": 63488 00:17:19.673 }, 00:17:19.673 { 00:17:19.673 "name": "BaseBdev4", 00:17:19.673 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:19.673 "is_configured": true, 00:17:19.673 "data_offset": 2048, 00:17:19.673 "data_size": 63488 00:17:19.673 } 00:17:19.673 ] 00:17:19.673 }' 00:17:19.673 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.673 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.673 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.673 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.673 09:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.613 [2024-10-15 09:16:38.214674] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:20.613 [2024-10-15 09:16:38.214785] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:20.613 [2024-10-15 09:16:38.214951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.613 "name": "raid_bdev1", 00:17:20.613 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:20.613 "strip_size_kb": 64, 00:17:20.613 "state": "online", 00:17:20.613 "raid_level": "raid5f", 00:17:20.613 "superblock": true, 00:17:20.613 "num_base_bdevs": 4, 00:17:20.613 "num_base_bdevs_discovered": 4, 00:17:20.613 "num_base_bdevs_operational": 4, 00:17:20.613 "base_bdevs_list": [ 00:17:20.613 { 00:17:20.613 "name": "spare", 00:17:20.613 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:20.613 "is_configured": true, 00:17:20.613 "data_offset": 2048, 00:17:20.613 "data_size": 63488 00:17:20.613 }, 00:17:20.613 { 00:17:20.613 "name": "BaseBdev2", 00:17:20.613 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:20.613 "is_configured": true, 00:17:20.613 "data_offset": 2048, 00:17:20.613 "data_size": 63488 00:17:20.613 }, 00:17:20.613 { 00:17:20.613 "name": "BaseBdev3", 00:17:20.613 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:20.613 "is_configured": true, 00:17:20.613 "data_offset": 2048, 00:17:20.613 "data_size": 63488 00:17:20.613 }, 
00:17:20.613 { 00:17:20.613 "name": "BaseBdev4", 00:17:20.613 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:20.613 "is_configured": true, 00:17:20.613 "data_offset": 2048, 00:17:20.613 "data_size": 63488 00:17:20.613 } 00:17:20.613 ] 00:17:20.613 }' 00:17:20.613 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.872 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:20.872 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.872 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:20.872 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:20.872 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.872 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.873 "name": "raid_bdev1", 00:17:20.873 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:20.873 "strip_size_kb": 64, 00:17:20.873 "state": "online", 00:17:20.873 "raid_level": "raid5f", 00:17:20.873 "superblock": true, 00:17:20.873 "num_base_bdevs": 4, 00:17:20.873 "num_base_bdevs_discovered": 4, 00:17:20.873 "num_base_bdevs_operational": 4, 00:17:20.873 "base_bdevs_list": [ 00:17:20.873 { 00:17:20.873 "name": "spare", 00:17:20.873 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:20.873 "is_configured": true, 00:17:20.873 "data_offset": 2048, 00:17:20.873 "data_size": 63488 00:17:20.873 }, 00:17:20.873 { 00:17:20.873 "name": "BaseBdev2", 00:17:20.873 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:20.873 "is_configured": true, 00:17:20.873 "data_offset": 2048, 00:17:20.873 "data_size": 63488 00:17:20.873 }, 00:17:20.873 { 00:17:20.873 "name": "BaseBdev3", 00:17:20.873 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:20.873 "is_configured": true, 00:17:20.873 "data_offset": 2048, 00:17:20.873 "data_size": 63488 00:17:20.873 }, 00:17:20.873 { 00:17:20.873 "name": "BaseBdev4", 00:17:20.873 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:20.873 "is_configured": true, 00:17:20.873 "data_offset": 2048, 00:17:20.873 "data_size": 63488 00:17:20.873 } 00:17:20.873 ] 00:17:20.873 }' 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:20.873 09:16:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.873 "name": "raid_bdev1", 00:17:20.873 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:20.873 "strip_size_kb": 64, 00:17:20.873 "state": "online", 00:17:20.873 "raid_level": "raid5f", 00:17:20.873 "superblock": true, 00:17:20.873 "num_base_bdevs": 4, 00:17:20.873 "num_base_bdevs_discovered": 4, 00:17:20.873 "num_base_bdevs_operational": 4, 00:17:20.873 
"base_bdevs_list": [ 00:17:20.873 { 00:17:20.873 "name": "spare", 00:17:20.873 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:20.873 "is_configured": true, 00:17:20.873 "data_offset": 2048, 00:17:20.873 "data_size": 63488 00:17:20.873 }, 00:17:20.873 { 00:17:20.873 "name": "BaseBdev2", 00:17:20.873 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:20.873 "is_configured": true, 00:17:20.873 "data_offset": 2048, 00:17:20.873 "data_size": 63488 00:17:20.873 }, 00:17:20.873 { 00:17:20.873 "name": "BaseBdev3", 00:17:20.873 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:20.873 "is_configured": true, 00:17:20.873 "data_offset": 2048, 00:17:20.873 "data_size": 63488 00:17:20.873 }, 00:17:20.873 { 00:17:20.873 "name": "BaseBdev4", 00:17:20.873 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:20.873 "is_configured": true, 00:17:20.873 "data_offset": 2048, 00:17:20.873 "data_size": 63488 00:17:20.873 } 00:17:20.873 ] 00:17:20.873 }' 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.873 09:16:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.441 [2024-10-15 09:16:39.184505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.441 [2024-10-15 09:16:39.184551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.441 [2024-10-15 09:16:39.184654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.441 [2024-10-15 09:16:39.184796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:21.441 [2024-10-15 09:16:39.184831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:21.441 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:21.442 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:21.442 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:21.442 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:21.442 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:21.442 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:17:21.442 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:21.442 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:21.700 /dev/nbd0 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.701 1+0 records in 00:17:21.701 1+0 records out 00:17:21.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259698 s, 15.8 MB/s 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:21.701 09:16:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:21.701 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:21.959 /dev/nbd1 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:17:21.959 1+0 records in 00:17:21.959 1+0 records out 00:17:21.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428719 s, 9.6 MB/s 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:21.959 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:22.264 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:22.264 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.264 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:22.264 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.264 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:22.264 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.264 09:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.523 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.782 [2024-10-15 09:16:40.479353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:22.782 [2024-10-15 09:16:40.479417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.782 [2024-10-15 09:16:40.479443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:22.782 [2024-10-15 09:16:40.479454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.782 [2024-10-15 09:16:40.481915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.782 [2024-10-15 09:16:40.481956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:22.782 [2024-10-15 09:16:40.482027] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:22.782 [2024-10-15 09:16:40.482088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.782 [2024-10-15 09:16:40.482293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.782 [2024-10-15 09:16:40.482401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:22.782 [2024-10-15 09:16:40.482494] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:22.782 spare 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.782 [2024-10-15 09:16:40.582428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:22.782 [2024-10-15 09:16:40.582474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:22.782 [2024-10-15 09:16:40.582918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:22.782 [2024-10-15 09:16:40.591226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:22.782 [2024-10-15 09:16:40.591250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:22.782 [2024-10-15 09:16:40.591473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.782 "name": "raid_bdev1", 00:17:22.782 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:22.782 "strip_size_kb": 64, 00:17:22.782 "state": "online", 00:17:22.782 "raid_level": "raid5f", 00:17:22.782 "superblock": true, 00:17:22.782 "num_base_bdevs": 4, 00:17:22.782 "num_base_bdevs_discovered": 4, 00:17:22.782 "num_base_bdevs_operational": 4, 00:17:22.782 "base_bdevs_list": [ 00:17:22.782 { 00:17:22.782 "name": "spare", 00:17:22.782 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:22.782 "is_configured": true, 00:17:22.782 "data_offset": 2048, 00:17:22.782 "data_size": 63488 00:17:22.782 }, 00:17:22.782 { 00:17:22.782 "name": "BaseBdev2", 00:17:22.782 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:22.782 "is_configured": true, 00:17:22.782 "data_offset": 
2048, 00:17:22.782 "data_size": 63488 00:17:22.782 }, 00:17:22.782 { 00:17:22.782 "name": "BaseBdev3", 00:17:22.782 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:22.782 "is_configured": true, 00:17:22.782 "data_offset": 2048, 00:17:22.782 "data_size": 63488 00:17:22.782 }, 00:17:22.782 { 00:17:22.782 "name": "BaseBdev4", 00:17:22.782 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:22.782 "is_configured": true, 00:17:22.782 "data_offset": 2048, 00:17:22.782 "data_size": 63488 00:17:22.782 } 00:17:22.782 ] 00:17:22.782 }' 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.782 09:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.353 "name": 
"raid_bdev1", 00:17:23.353 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:23.353 "strip_size_kb": 64, 00:17:23.353 "state": "online", 00:17:23.353 "raid_level": "raid5f", 00:17:23.353 "superblock": true, 00:17:23.353 "num_base_bdevs": 4, 00:17:23.353 "num_base_bdevs_discovered": 4, 00:17:23.353 "num_base_bdevs_operational": 4, 00:17:23.353 "base_bdevs_list": [ 00:17:23.353 { 00:17:23.353 "name": "spare", 00:17:23.353 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:23.353 "is_configured": true, 00:17:23.353 "data_offset": 2048, 00:17:23.353 "data_size": 63488 00:17:23.353 }, 00:17:23.353 { 00:17:23.353 "name": "BaseBdev2", 00:17:23.353 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:23.353 "is_configured": true, 00:17:23.353 "data_offset": 2048, 00:17:23.353 "data_size": 63488 00:17:23.353 }, 00:17:23.353 { 00:17:23.353 "name": "BaseBdev3", 00:17:23.353 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:23.353 "is_configured": true, 00:17:23.353 "data_offset": 2048, 00:17:23.353 "data_size": 63488 00:17:23.353 }, 00:17:23.353 { 00:17:23.353 "name": "BaseBdev4", 00:17:23.353 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:23.353 "is_configured": true, 00:17:23.353 "data_offset": 2048, 00:17:23.353 "data_size": 63488 00:17:23.353 } 00:17:23.353 ] 00:17:23.353 }' 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.353 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.611 [2024-10-15 09:16:41.283452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.611 "name": "raid_bdev1", 00:17:23.611 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:23.611 "strip_size_kb": 64, 00:17:23.611 "state": "online", 00:17:23.611 "raid_level": "raid5f", 00:17:23.611 "superblock": true, 00:17:23.611 "num_base_bdevs": 4, 00:17:23.611 "num_base_bdevs_discovered": 3, 00:17:23.611 "num_base_bdevs_operational": 3, 00:17:23.611 "base_bdevs_list": [ 00:17:23.611 { 00:17:23.611 "name": null, 00:17:23.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.611 "is_configured": false, 00:17:23.611 "data_offset": 0, 00:17:23.611 "data_size": 63488 00:17:23.611 }, 00:17:23.611 { 00:17:23.611 "name": "BaseBdev2", 00:17:23.611 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:23.611 "is_configured": true, 00:17:23.611 "data_offset": 2048, 00:17:23.611 "data_size": 63488 00:17:23.611 }, 00:17:23.611 { 00:17:23.611 "name": "BaseBdev3", 00:17:23.611 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:23.611 "is_configured": true, 00:17:23.611 "data_offset": 2048, 00:17:23.611 "data_size": 63488 00:17:23.611 }, 00:17:23.611 { 00:17:23.611 "name": "BaseBdev4", 00:17:23.611 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:23.611 "is_configured": true, 00:17:23.611 "data_offset": 
2048, 00:17:23.611 "data_size": 63488 00:17:23.611 } 00:17:23.611 ] 00:17:23.611 }' 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.611 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.178 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:24.178 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.178 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.178 [2024-10-15 09:16:41.790677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.178 [2024-10-15 09:16:41.790946] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:24.178 [2024-10-15 09:16:41.790973] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:24.178 [2024-10-15 09:16:41.791011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.178 [2024-10-15 09:16:41.807451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:24.178 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.178 09:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:24.178 [2024-10-15 09:16:41.818301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.118 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.118 "name": "raid_bdev1", 00:17:25.118 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:25.118 "strip_size_kb": 64, 00:17:25.118 "state": "online", 00:17:25.118 
"raid_level": "raid5f", 00:17:25.118 "superblock": true, 00:17:25.118 "num_base_bdevs": 4, 00:17:25.118 "num_base_bdevs_discovered": 4, 00:17:25.118 "num_base_bdevs_operational": 4, 00:17:25.118 "process": { 00:17:25.118 "type": "rebuild", 00:17:25.118 "target": "spare", 00:17:25.118 "progress": { 00:17:25.118 "blocks": 17280, 00:17:25.118 "percent": 9 00:17:25.119 } 00:17:25.119 }, 00:17:25.119 "base_bdevs_list": [ 00:17:25.119 { 00:17:25.119 "name": "spare", 00:17:25.119 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:25.119 "is_configured": true, 00:17:25.119 "data_offset": 2048, 00:17:25.119 "data_size": 63488 00:17:25.119 }, 00:17:25.119 { 00:17:25.119 "name": "BaseBdev2", 00:17:25.119 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:25.119 "is_configured": true, 00:17:25.119 "data_offset": 2048, 00:17:25.119 "data_size": 63488 00:17:25.119 }, 00:17:25.119 { 00:17:25.119 "name": "BaseBdev3", 00:17:25.119 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:25.119 "is_configured": true, 00:17:25.119 "data_offset": 2048, 00:17:25.119 "data_size": 63488 00:17:25.119 }, 00:17:25.119 { 00:17:25.119 "name": "BaseBdev4", 00:17:25.119 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:25.119 "is_configured": true, 00:17:25.119 "data_offset": 2048, 00:17:25.119 "data_size": 63488 00:17:25.119 } 00:17:25.119 ] 00:17:25.119 }' 00:17:25.119 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.119 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.119 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.119 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.119 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:25.119 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.119 09:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.119 [2024-10-15 09:16:42.946024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.379 [2024-10-15 09:16:43.028096] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:25.379 [2024-10-15 09:16:43.028194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.379 [2024-10-15 09:16:43.028213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.379 [2024-10-15 09:16:43.028222] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.379 "name": "raid_bdev1", 00:17:25.379 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:25.379 "strip_size_kb": 64, 00:17:25.379 "state": "online", 00:17:25.379 "raid_level": "raid5f", 00:17:25.379 "superblock": true, 00:17:25.379 "num_base_bdevs": 4, 00:17:25.379 "num_base_bdevs_discovered": 3, 00:17:25.379 "num_base_bdevs_operational": 3, 00:17:25.379 "base_bdevs_list": [ 00:17:25.379 { 00:17:25.379 "name": null, 00:17:25.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.379 "is_configured": false, 00:17:25.379 "data_offset": 0, 00:17:25.379 "data_size": 63488 00:17:25.379 }, 00:17:25.379 { 00:17:25.379 "name": "BaseBdev2", 00:17:25.379 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:25.379 "is_configured": true, 00:17:25.379 "data_offset": 2048, 00:17:25.379 "data_size": 63488 00:17:25.379 }, 00:17:25.379 { 00:17:25.379 "name": "BaseBdev3", 00:17:25.379 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:25.379 "is_configured": true, 00:17:25.379 "data_offset": 2048, 00:17:25.379 "data_size": 63488 00:17:25.379 }, 00:17:25.379 { 00:17:25.379 "name": "BaseBdev4", 00:17:25.379 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:25.379 "is_configured": true, 00:17:25.379 "data_offset": 2048, 00:17:25.379 "data_size": 63488 00:17:25.379 } 00:17:25.379 ] 00:17:25.379 }' 
00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.379 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.947 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:25.947 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.947 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.948 [2024-10-15 09:16:43.554347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:25.948 [2024-10-15 09:16:43.554434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.948 [2024-10-15 09:16:43.554472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:25.948 [2024-10-15 09:16:43.554488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.948 [2024-10-15 09:16:43.555125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.948 [2024-10-15 09:16:43.555170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:25.948 [2024-10-15 09:16:43.555291] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:25.948 [2024-10-15 09:16:43.555319] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.948 [2024-10-15 09:16:43.555336] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:25.948 [2024-10-15 09:16:43.555371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.948 [2024-10-15 09:16:43.573881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:25.948 spare 00:17:25.948 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.948 09:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:25.948 [2024-10-15 09:16:43.585528] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.887 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.887 "name": "raid_bdev1", 00:17:26.887 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:26.887 "strip_size_kb": 64, 00:17:26.887 "state": 
"online", 00:17:26.887 "raid_level": "raid5f", 00:17:26.887 "superblock": true, 00:17:26.887 "num_base_bdevs": 4, 00:17:26.887 "num_base_bdevs_discovered": 4, 00:17:26.887 "num_base_bdevs_operational": 4, 00:17:26.887 "process": { 00:17:26.887 "type": "rebuild", 00:17:26.887 "target": "spare", 00:17:26.887 "progress": { 00:17:26.887 "blocks": 17280, 00:17:26.887 "percent": 9 00:17:26.887 } 00:17:26.887 }, 00:17:26.887 "base_bdevs_list": [ 00:17:26.887 { 00:17:26.887 "name": "spare", 00:17:26.887 "uuid": "bec3d8b6-2aec-5364-bb3f-20533c615cd9", 00:17:26.887 "is_configured": true, 00:17:26.887 "data_offset": 2048, 00:17:26.887 "data_size": 63488 00:17:26.887 }, 00:17:26.887 { 00:17:26.887 "name": "BaseBdev2", 00:17:26.887 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:26.887 "is_configured": true, 00:17:26.887 "data_offset": 2048, 00:17:26.887 "data_size": 63488 00:17:26.887 }, 00:17:26.887 { 00:17:26.887 "name": "BaseBdev3", 00:17:26.888 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:26.888 "is_configured": true, 00:17:26.888 "data_offset": 2048, 00:17:26.888 "data_size": 63488 00:17:26.888 }, 00:17:26.888 { 00:17:26.888 "name": "BaseBdev4", 00:17:26.888 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:26.888 "is_configured": true, 00:17:26.888 "data_offset": 2048, 00:17:26.888 "data_size": 63488 00:17:26.888 } 00:17:26.888 ] 00:17:26.888 }' 00:17:26.888 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.888 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.888 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.888 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.888 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:26.888 09:16:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.888 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.888 [2024-10-15 09:16:44.744677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.147 [2024-10-15 09:16:44.795278] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.147 [2024-10-15 09:16:44.795365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.147 [2024-10-15 09:16:44.795390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.147 [2024-10-15 09:16:44.795399] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.147 09:16:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.147 "name": "raid_bdev1", 00:17:27.147 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:27.147 "strip_size_kb": 64, 00:17:27.147 "state": "online", 00:17:27.147 "raid_level": "raid5f", 00:17:27.147 "superblock": true, 00:17:27.147 "num_base_bdevs": 4, 00:17:27.147 "num_base_bdevs_discovered": 3, 00:17:27.147 "num_base_bdevs_operational": 3, 00:17:27.147 "base_bdevs_list": [ 00:17:27.147 { 00:17:27.147 "name": null, 00:17:27.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.147 "is_configured": false, 00:17:27.147 "data_offset": 0, 00:17:27.147 "data_size": 63488 00:17:27.147 }, 00:17:27.147 { 00:17:27.147 "name": "BaseBdev2", 00:17:27.147 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:27.147 "is_configured": true, 00:17:27.147 "data_offset": 2048, 00:17:27.147 "data_size": 63488 00:17:27.147 }, 00:17:27.147 { 00:17:27.147 "name": "BaseBdev3", 00:17:27.147 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:27.147 "is_configured": true, 00:17:27.147 "data_offset": 2048, 00:17:27.147 "data_size": 63488 00:17:27.147 }, 00:17:27.147 { 00:17:27.147 "name": "BaseBdev4", 00:17:27.147 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:27.147 "is_configured": true, 00:17:27.147 "data_offset": 2048, 00:17:27.147 
"data_size": 63488 00:17:27.147 } 00:17:27.147 ] 00:17:27.147 }' 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.147 09:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.717 "name": "raid_bdev1", 00:17:27.717 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:27.717 "strip_size_kb": 64, 00:17:27.717 "state": "online", 00:17:27.717 "raid_level": "raid5f", 00:17:27.717 "superblock": true, 00:17:27.717 "num_base_bdevs": 4, 00:17:27.717 "num_base_bdevs_discovered": 3, 00:17:27.717 "num_base_bdevs_operational": 3, 00:17:27.717 "base_bdevs_list": [ 00:17:27.717 { 00:17:27.717 "name": null, 00:17:27.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.717 
"is_configured": false, 00:17:27.717 "data_offset": 0, 00:17:27.717 "data_size": 63488 00:17:27.717 }, 00:17:27.717 { 00:17:27.717 "name": "BaseBdev2", 00:17:27.717 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:27.717 "is_configured": true, 00:17:27.717 "data_offset": 2048, 00:17:27.717 "data_size": 63488 00:17:27.717 }, 00:17:27.717 { 00:17:27.717 "name": "BaseBdev3", 00:17:27.717 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:27.717 "is_configured": true, 00:17:27.717 "data_offset": 2048, 00:17:27.717 "data_size": 63488 00:17:27.717 }, 00:17:27.717 { 00:17:27.717 "name": "BaseBdev4", 00:17:27.717 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:27.717 "is_configured": true, 00:17:27.717 "data_offset": 2048, 00:17:27.717 "data_size": 63488 00:17:27.717 } 00:17:27.717 ] 00:17:27.717 }' 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:27.717 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.717 09:16:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.717 [2024-10-15 09:16:45.481121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:27.717 [2024-10-15 09:16:45.481192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.717 [2024-10-15 09:16:45.481220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:27.717 [2024-10-15 09:16:45.481232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.717 [2024-10-15 09:16:45.481837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.717 [2024-10-15 09:16:45.481877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:27.717 [2024-10-15 09:16:45.481981] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:27.717 [2024-10-15 09:16:45.482008] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:27.718 [2024-10-15 09:16:45.482024] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:27.718 [2024-10-15 09:16:45.482036] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:27.718 BaseBdev1 00:17:27.718 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.718 09:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:28.656 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:28.656 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.656 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:28.656 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.656 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.656 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.656 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.656 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.657 "name": "raid_bdev1", 00:17:28.657 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:28.657 "strip_size_kb": 64, 00:17:28.657 "state": "online", 00:17:28.657 "raid_level": "raid5f", 00:17:28.657 "superblock": true, 00:17:28.657 "num_base_bdevs": 4, 00:17:28.657 "num_base_bdevs_discovered": 3, 00:17:28.657 "num_base_bdevs_operational": 3, 00:17:28.657 "base_bdevs_list": [ 00:17:28.657 { 00:17:28.657 "name": null, 00:17:28.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.657 "is_configured": false, 00:17:28.657 
"data_offset": 0, 00:17:28.657 "data_size": 63488 00:17:28.657 }, 00:17:28.657 { 00:17:28.657 "name": "BaseBdev2", 00:17:28.657 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:28.657 "is_configured": true, 00:17:28.657 "data_offset": 2048, 00:17:28.657 "data_size": 63488 00:17:28.657 }, 00:17:28.657 { 00:17:28.657 "name": "BaseBdev3", 00:17:28.657 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:28.657 "is_configured": true, 00:17:28.657 "data_offset": 2048, 00:17:28.657 "data_size": 63488 00:17:28.657 }, 00:17:28.657 { 00:17:28.657 "name": "BaseBdev4", 00:17:28.657 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:28.657 "is_configured": true, 00:17:28.657 "data_offset": 2048, 00:17:28.657 "data_size": 63488 00:17:28.657 } 00:17:28.657 ] 00:17:28.657 }' 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.657 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:29.225 09:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.226 "name": "raid_bdev1", 00:17:29.226 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:29.226 "strip_size_kb": 64, 00:17:29.226 "state": "online", 00:17:29.226 "raid_level": "raid5f", 00:17:29.226 "superblock": true, 00:17:29.226 "num_base_bdevs": 4, 00:17:29.226 "num_base_bdevs_discovered": 3, 00:17:29.226 "num_base_bdevs_operational": 3, 00:17:29.226 "base_bdevs_list": [ 00:17:29.226 { 00:17:29.226 "name": null, 00:17:29.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.226 "is_configured": false, 00:17:29.226 "data_offset": 0, 00:17:29.226 "data_size": 63488 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "name": "BaseBdev2", 00:17:29.226 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:29.226 "is_configured": true, 00:17:29.226 "data_offset": 2048, 00:17:29.226 "data_size": 63488 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "name": "BaseBdev3", 00:17:29.226 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:29.226 "is_configured": true, 00:17:29.226 "data_offset": 2048, 00:17:29.226 "data_size": 63488 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "name": "BaseBdev4", 00:17:29.226 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:29.226 "is_configured": true, 00:17:29.226 "data_offset": 2048, 00:17:29.226 "data_size": 63488 00:17:29.226 } 00:17:29.226 ] 00:17:29.226 }' 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.226 
09:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.226 [2024-10-15 09:16:47.106538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.226 [2024-10-15 09:16:47.106781] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:29.226 [2024-10-15 09:16:47.106809] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:29.226 request: 00:17:29.226 { 00:17:29.226 "base_bdev": "BaseBdev1", 00:17:29.226 "raid_bdev": "raid_bdev1", 00:17:29.226 "method": "bdev_raid_add_base_bdev", 00:17:29.226 "req_id": 1 00:17:29.226 } 00:17:29.226 Got JSON-RPC error response 00:17:29.226 response: 00:17:29.226 { 00:17:29.226 "code": -22, 00:17:29.226 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:29.226 } 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.226 09:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.604 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.604 "name": "raid_bdev1", 00:17:30.604 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:30.604 "strip_size_kb": 64, 00:17:30.604 "state": "online", 00:17:30.605 "raid_level": "raid5f", 00:17:30.605 "superblock": true, 00:17:30.605 "num_base_bdevs": 4, 00:17:30.605 "num_base_bdevs_discovered": 3, 00:17:30.605 "num_base_bdevs_operational": 3, 00:17:30.605 "base_bdevs_list": [ 00:17:30.605 { 00:17:30.605 "name": null, 00:17:30.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.605 "is_configured": false, 00:17:30.605 "data_offset": 0, 00:17:30.605 "data_size": 63488 00:17:30.605 }, 00:17:30.605 { 00:17:30.605 "name": "BaseBdev2", 00:17:30.605 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:30.605 "is_configured": true, 00:17:30.605 "data_offset": 2048, 00:17:30.605 "data_size": 63488 00:17:30.605 }, 00:17:30.605 { 00:17:30.605 "name": "BaseBdev3", 00:17:30.605 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:30.605 "is_configured": true, 00:17:30.605 "data_offset": 2048, 00:17:30.605 "data_size": 63488 00:17:30.605 }, 00:17:30.605 { 00:17:30.605 "name": "BaseBdev4", 00:17:30.605 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:30.605 "is_configured": true, 00:17:30.605 "data_offset": 2048, 00:17:30.605 "data_size": 63488 00:17:30.605 } 00:17:30.605 ] 00:17:30.605 }' 00:17:30.605 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.605 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.864 "name": "raid_bdev1", 00:17:30.864 "uuid": "efe2f3fb-f993-42bb-9522-fec059ea6eb7", 00:17:30.864 "strip_size_kb": 64, 00:17:30.864 "state": "online", 00:17:30.864 "raid_level": "raid5f", 00:17:30.864 "superblock": true, 00:17:30.864 "num_base_bdevs": 4, 00:17:30.864 "num_base_bdevs_discovered": 3, 00:17:30.864 "num_base_bdevs_operational": 3, 00:17:30.864 "base_bdevs_list": [ 00:17:30.864 { 00:17:30.864 "name": null, 00:17:30.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.864 "is_configured": false, 00:17:30.864 "data_offset": 0, 00:17:30.864 "data_size": 63488 00:17:30.864 }, 00:17:30.864 { 00:17:30.864 "name": "BaseBdev2", 00:17:30.864 "uuid": "2182427d-4b62-5ae2-af2b-22934c4e9775", 00:17:30.864 "is_configured": true, 
00:17:30.864 "data_offset": 2048, 00:17:30.864 "data_size": 63488 00:17:30.864 }, 00:17:30.864 { 00:17:30.864 "name": "BaseBdev3", 00:17:30.864 "uuid": "2d564980-8744-58fd-8ade-575e6bf3661d", 00:17:30.864 "is_configured": true, 00:17:30.864 "data_offset": 2048, 00:17:30.864 "data_size": 63488 00:17:30.864 }, 00:17:30.864 { 00:17:30.864 "name": "BaseBdev4", 00:17:30.864 "uuid": "c42f48cc-8232-5244-a2f9-3fbf7f549556", 00:17:30.864 "is_configured": true, 00:17:30.864 "data_offset": 2048, 00:17:30.864 "data_size": 63488 00:17:30.864 } 00:17:30.864 ] 00:17:30.864 }' 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85426 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85426 ']' 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85426 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85426 00:17:30.864 killing process with pid 85426 00:17:30.864 Received shutdown signal, test time was about 60.000000 seconds 00:17:30.864 00:17:30.864 Latency(us) 00:17:30.864 [2024-10-15T09:16:48.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.864 [2024-10-15T09:16:48.760Z] 
=================================================================================================================== 00:17:30.864 [2024-10-15T09:16:48.760Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85426' 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85426 00:17:30.864 [2024-10-15 09:16:48.728251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:30.864 09:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85426 00:17:30.864 [2024-10-15 09:16:48.728400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.864 [2024-10-15 09:16:48.728486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.864 [2024-10-15 09:16:48.728500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:31.432 [2024-10-15 09:16:49.244951] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.811 09:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:32.811 00:17:32.811 real 0m27.544s 00:17:32.811 user 0m34.784s 00:17:32.811 sys 0m3.113s 00:17:32.811 09:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.811 09:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.811 ************************************ 00:17:32.811 END TEST raid5f_rebuild_test_sb 00:17:32.811 ************************************ 00:17:32.811 09:16:50 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:32.811 09:16:50 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:32.811 09:16:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:32.811 09:16:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.811 09:16:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.811 ************************************ 00:17:32.811 START TEST raid_state_function_test_sb_4k 00:17:32.811 ************************************ 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:32.811 09:16:50 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86242 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:32.811 Process raid pid: 86242 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86242' 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86242 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86242 ']' 00:17:32.811 09:16:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.811 09:16:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.811 [2024-10-15 09:16:50.561658] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:17:32.811 [2024-10-15 09:16:50.561797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.097 [2024-10-15 09:16:50.727742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.097 [2024-10-15 09:16:50.863342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.357 [2024-10-15 09:16:51.081861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.357 [2024-10-15 09:16:51.081909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.617 [2024-10-15 09:16:51.420923] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.617 [2024-10-15 09:16:51.420983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.617 [2024-10-15 09:16:51.420994] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.617 [2024-10-15 09:16:51.421005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.617 
09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.617 "name": "Existed_Raid", 00:17:33.617 "uuid": "b0da00c1-74eb-4458-9a62-19bd375f2d4d", 00:17:33.617 "strip_size_kb": 0, 00:17:33.617 "state": "configuring", 00:17:33.617 "raid_level": "raid1", 00:17:33.617 "superblock": true, 00:17:33.617 "num_base_bdevs": 2, 00:17:33.617 "num_base_bdevs_discovered": 0, 00:17:33.617 "num_base_bdevs_operational": 2, 00:17:33.617 "base_bdevs_list": [ 00:17:33.617 { 00:17:33.617 "name": "BaseBdev1", 00:17:33.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.617 "is_configured": false, 00:17:33.617 "data_offset": 0, 00:17:33.617 "data_size": 0 00:17:33.617 }, 00:17:33.617 { 00:17:33.617 "name": "BaseBdev2", 00:17:33.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.617 "is_configured": false, 00:17:33.617 "data_offset": 0, 00:17:33.617 "data_size": 0 00:17:33.617 } 00:17:33.617 ] 00:17:33.617 }' 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.617 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.186 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:34.186 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.186 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.186 [2024-10-15 09:16:51.923997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.186 [2024-10-15 09:16:51.924095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:34.186 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.186 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:34.186 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.186 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.186 [2024-10-15 09:16:51.935997] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.186 [2024-10-15 09:16:51.936084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.187 [2024-10-15 09:16:51.936118] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.187 [2024-10-15 09:16:51.936148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.187 09:16:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 [2024-10-15 09:16:51.986588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.187 BaseBdev1 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.187 09:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 [ 00:17:34.187 { 00:17:34.187 "name": "BaseBdev1", 00:17:34.187 "aliases": [ 00:17:34.187 
"82699ad5-24fd-422d-bc92-2f96245346e1" 00:17:34.187 ], 00:17:34.187 "product_name": "Malloc disk", 00:17:34.187 "block_size": 4096, 00:17:34.187 "num_blocks": 8192, 00:17:34.187 "uuid": "82699ad5-24fd-422d-bc92-2f96245346e1", 00:17:34.187 "assigned_rate_limits": { 00:17:34.187 "rw_ios_per_sec": 0, 00:17:34.187 "rw_mbytes_per_sec": 0, 00:17:34.187 "r_mbytes_per_sec": 0, 00:17:34.187 "w_mbytes_per_sec": 0 00:17:34.187 }, 00:17:34.187 "claimed": true, 00:17:34.187 "claim_type": "exclusive_write", 00:17:34.187 "zoned": false, 00:17:34.187 "supported_io_types": { 00:17:34.187 "read": true, 00:17:34.187 "write": true, 00:17:34.187 "unmap": true, 00:17:34.187 "flush": true, 00:17:34.187 "reset": true, 00:17:34.187 "nvme_admin": false, 00:17:34.187 "nvme_io": false, 00:17:34.187 "nvme_io_md": false, 00:17:34.187 "write_zeroes": true, 00:17:34.187 "zcopy": true, 00:17:34.187 "get_zone_info": false, 00:17:34.187 "zone_management": false, 00:17:34.187 "zone_append": false, 00:17:34.187 "compare": false, 00:17:34.187 "compare_and_write": false, 00:17:34.187 "abort": true, 00:17:34.187 "seek_hole": false, 00:17:34.187 "seek_data": false, 00:17:34.187 "copy": true, 00:17:34.187 "nvme_iov_md": false 00:17:34.187 }, 00:17:34.187 "memory_domains": [ 00:17:34.187 { 00:17:34.187 "dma_device_id": "system", 00:17:34.187 "dma_device_type": 1 00:17:34.187 }, 00:17:34.187 { 00:17:34.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.187 "dma_device_type": 2 00:17:34.187 } 00:17:34.187 ], 00:17:34.187 "driver_specific": {} 00:17:34.187 } 00:17:34.187 ] 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.187 "name": "Existed_Raid", 00:17:34.187 "uuid": "dfd53ae7-bf91-4481-b600-7d0429c00f14", 00:17:34.187 "strip_size_kb": 0, 00:17:34.187 "state": "configuring", 00:17:34.187 "raid_level": "raid1", 00:17:34.187 "superblock": true, 00:17:34.187 "num_base_bdevs": 2, 00:17:34.187 
"num_base_bdevs_discovered": 1, 00:17:34.187 "num_base_bdevs_operational": 2, 00:17:34.187 "base_bdevs_list": [ 00:17:34.187 { 00:17:34.187 "name": "BaseBdev1", 00:17:34.187 "uuid": "82699ad5-24fd-422d-bc92-2f96245346e1", 00:17:34.187 "is_configured": true, 00:17:34.187 "data_offset": 256, 00:17:34.187 "data_size": 7936 00:17:34.187 }, 00:17:34.187 { 00:17:34.187 "name": "BaseBdev2", 00:17:34.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.187 "is_configured": false, 00:17:34.187 "data_offset": 0, 00:17:34.187 "data_size": 0 00:17:34.187 } 00:17:34.187 ] 00:17:34.187 }' 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.187 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.756 [2024-10-15 09:16:52.521779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.756 [2024-10-15 09:16:52.521840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.756 [2024-10-15 09:16:52.529814] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.756 [2024-10-15 09:16:52.531909] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.756 [2024-10-15 09:16:52.531953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.756 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.757 "name": "Existed_Raid", 00:17:34.757 "uuid": "4b31755e-81d1-4e2f-9dfb-f509a909bdea", 00:17:34.757 "strip_size_kb": 0, 00:17:34.757 "state": "configuring", 00:17:34.757 "raid_level": "raid1", 00:17:34.757 "superblock": true, 00:17:34.757 "num_base_bdevs": 2, 00:17:34.757 "num_base_bdevs_discovered": 1, 00:17:34.757 "num_base_bdevs_operational": 2, 00:17:34.757 "base_bdevs_list": [ 00:17:34.757 { 00:17:34.757 "name": "BaseBdev1", 00:17:34.757 "uuid": "82699ad5-24fd-422d-bc92-2f96245346e1", 00:17:34.757 "is_configured": true, 00:17:34.757 "data_offset": 256, 00:17:34.757 "data_size": 7936 00:17:34.757 }, 00:17:34.757 { 00:17:34.757 "name": "BaseBdev2", 00:17:34.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.757 "is_configured": false, 00:17:34.757 "data_offset": 0, 00:17:34.757 "data_size": 0 00:17:34.757 } 00:17:34.757 ] 00:17:34.757 }' 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.757 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:35.326 09:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.326 09:16:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 [2024-10-15 09:16:53.036827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.326 [2024-10-15 09:16:53.037218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:35.326 [2024-10-15 09:16:53.037276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.326 [2024-10-15 09:16:53.037613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:35.326 [2024-10-15 09:16:53.037839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:35.326 [2024-10-15 09:16:53.037892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:35.326 BaseBdev2 00:17:35.326 [2024-10-15 09:16:53.038102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:35.326 09:16:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 [ 00:17:35.326 { 00:17:35.326 "name": "BaseBdev2", 00:17:35.326 "aliases": [ 00:17:35.326 "3e5ba43b-d491-4150-a46f-b9610ec9cfea" 00:17:35.326 ], 00:17:35.326 "product_name": "Malloc disk", 00:17:35.326 "block_size": 4096, 00:17:35.326 "num_blocks": 8192, 00:17:35.326 "uuid": "3e5ba43b-d491-4150-a46f-b9610ec9cfea", 00:17:35.326 "assigned_rate_limits": { 00:17:35.326 "rw_ios_per_sec": 0, 00:17:35.326 "rw_mbytes_per_sec": 0, 00:17:35.326 "r_mbytes_per_sec": 0, 00:17:35.326 "w_mbytes_per_sec": 0 00:17:35.326 }, 00:17:35.326 "claimed": true, 00:17:35.326 "claim_type": "exclusive_write", 00:17:35.326 "zoned": false, 00:17:35.326 "supported_io_types": { 00:17:35.326 "read": true, 00:17:35.326 "write": true, 00:17:35.326 "unmap": true, 00:17:35.326 "flush": true, 00:17:35.326 "reset": true, 00:17:35.326 "nvme_admin": false, 00:17:35.326 "nvme_io": false, 00:17:35.326 "nvme_io_md": false, 00:17:35.326 "write_zeroes": true, 00:17:35.326 "zcopy": true, 00:17:35.326 "get_zone_info": false, 00:17:35.326 "zone_management": false, 00:17:35.326 "zone_append": false, 00:17:35.326 "compare": false, 00:17:35.326 "compare_and_write": false, 00:17:35.326 "abort": true, 00:17:35.326 "seek_hole": false, 00:17:35.326 "seek_data": false, 00:17:35.326 "copy": true, 00:17:35.326 "nvme_iov_md": false 
00:17:35.326 }, 00:17:35.326 "memory_domains": [ 00:17:35.326 { 00:17:35.326 "dma_device_id": "system", 00:17:35.326 "dma_device_type": 1 00:17:35.326 }, 00:17:35.326 { 00:17:35.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.326 "dma_device_type": 2 00:17:35.326 } 00:17:35.326 ], 00:17:35.326 "driver_specific": {} 00:17:35.326 } 00:17:35.326 ] 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.326 "name": "Existed_Raid", 00:17:35.326 "uuid": "4b31755e-81d1-4e2f-9dfb-f509a909bdea", 00:17:35.326 "strip_size_kb": 0, 00:17:35.326 "state": "online", 00:17:35.326 "raid_level": "raid1", 00:17:35.326 "superblock": true, 00:17:35.326 "num_base_bdevs": 2, 00:17:35.326 "num_base_bdevs_discovered": 2, 00:17:35.326 "num_base_bdevs_operational": 2, 00:17:35.326 "base_bdevs_list": [ 00:17:35.326 { 00:17:35.326 "name": "BaseBdev1", 00:17:35.326 "uuid": "82699ad5-24fd-422d-bc92-2f96245346e1", 00:17:35.326 "is_configured": true, 00:17:35.326 "data_offset": 256, 00:17:35.326 "data_size": 7936 00:17:35.326 }, 00:17:35.326 { 00:17:35.326 "name": "BaseBdev2", 00:17:35.326 "uuid": "3e5ba43b-d491-4150-a46f-b9610ec9cfea", 00:17:35.326 "is_configured": true, 00:17:35.326 "data_offset": 256, 00:17:35.326 "data_size": 7936 00:17:35.326 } 00:17:35.326 ] 00:17:35.326 }' 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.326 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.895 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:35.895 09:16:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:35.895 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:35.895 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:35.895 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:35.895 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.896 [2024-10-15 09:16:53.532431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:35.896 "name": "Existed_Raid", 00:17:35.896 "aliases": [ 00:17:35.896 "4b31755e-81d1-4e2f-9dfb-f509a909bdea" 00:17:35.896 ], 00:17:35.896 "product_name": "Raid Volume", 00:17:35.896 "block_size": 4096, 00:17:35.896 "num_blocks": 7936, 00:17:35.896 "uuid": "4b31755e-81d1-4e2f-9dfb-f509a909bdea", 00:17:35.896 "assigned_rate_limits": { 00:17:35.896 "rw_ios_per_sec": 0, 00:17:35.896 "rw_mbytes_per_sec": 0, 00:17:35.896 "r_mbytes_per_sec": 0, 00:17:35.896 "w_mbytes_per_sec": 0 00:17:35.896 }, 00:17:35.896 "claimed": false, 00:17:35.896 "zoned": false, 00:17:35.896 "supported_io_types": { 00:17:35.896 "read": true, 
00:17:35.896 "write": true, 00:17:35.896 "unmap": false, 00:17:35.896 "flush": false, 00:17:35.896 "reset": true, 00:17:35.896 "nvme_admin": false, 00:17:35.896 "nvme_io": false, 00:17:35.896 "nvme_io_md": false, 00:17:35.896 "write_zeroes": true, 00:17:35.896 "zcopy": false, 00:17:35.896 "get_zone_info": false, 00:17:35.896 "zone_management": false, 00:17:35.896 "zone_append": false, 00:17:35.896 "compare": false, 00:17:35.896 "compare_and_write": false, 00:17:35.896 "abort": false, 00:17:35.896 "seek_hole": false, 00:17:35.896 "seek_data": false, 00:17:35.896 "copy": false, 00:17:35.896 "nvme_iov_md": false 00:17:35.896 }, 00:17:35.896 "memory_domains": [ 00:17:35.896 { 00:17:35.896 "dma_device_id": "system", 00:17:35.896 "dma_device_type": 1 00:17:35.896 }, 00:17:35.896 { 00:17:35.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.896 "dma_device_type": 2 00:17:35.896 }, 00:17:35.896 { 00:17:35.896 "dma_device_id": "system", 00:17:35.896 "dma_device_type": 1 00:17:35.896 }, 00:17:35.896 { 00:17:35.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.896 "dma_device_type": 2 00:17:35.896 } 00:17:35.896 ], 00:17:35.896 "driver_specific": { 00:17:35.896 "raid": { 00:17:35.896 "uuid": "4b31755e-81d1-4e2f-9dfb-f509a909bdea", 00:17:35.896 "strip_size_kb": 0, 00:17:35.896 "state": "online", 00:17:35.896 "raid_level": "raid1", 00:17:35.896 "superblock": true, 00:17:35.896 "num_base_bdevs": 2, 00:17:35.896 "num_base_bdevs_discovered": 2, 00:17:35.896 "num_base_bdevs_operational": 2, 00:17:35.896 "base_bdevs_list": [ 00:17:35.896 { 00:17:35.896 "name": "BaseBdev1", 00:17:35.896 "uuid": "82699ad5-24fd-422d-bc92-2f96245346e1", 00:17:35.896 "is_configured": true, 00:17:35.896 "data_offset": 256, 00:17:35.896 "data_size": 7936 00:17:35.896 }, 00:17:35.896 { 00:17:35.896 "name": "BaseBdev2", 00:17:35.896 "uuid": "3e5ba43b-d491-4150-a46f-b9610ec9cfea", 00:17:35.896 "is_configured": true, 00:17:35.896 "data_offset": 256, 00:17:35.896 "data_size": 7936 00:17:35.896 } 
00:17:35.896 ] 00:17:35.896 } 00:17:35.896 } 00:17:35.896 }' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:35.896 BaseBdev2' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.896 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.896 [2024-10-15 09:16:53.767817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:36.156 09:16:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.156 "name": "Existed_Raid", 00:17:36.156 "uuid": "4b31755e-81d1-4e2f-9dfb-f509a909bdea", 00:17:36.156 "strip_size_kb": 0, 00:17:36.156 "state": "online", 00:17:36.156 "raid_level": "raid1", 00:17:36.156 "superblock": true, 00:17:36.156 
"num_base_bdevs": 2, 00:17:36.156 "num_base_bdevs_discovered": 1, 00:17:36.156 "num_base_bdevs_operational": 1, 00:17:36.156 "base_bdevs_list": [ 00:17:36.156 { 00:17:36.156 "name": null, 00:17:36.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.156 "is_configured": false, 00:17:36.156 "data_offset": 0, 00:17:36.156 "data_size": 7936 00:17:36.156 }, 00:17:36.156 { 00:17:36.156 "name": "BaseBdev2", 00:17:36.156 "uuid": "3e5ba43b-d491-4150-a46f-b9610ec9cfea", 00:17:36.156 "is_configured": true, 00:17:36.156 "data_offset": 256, 00:17:36.156 "data_size": 7936 00:17:36.156 } 00:17:36.156 ] 00:17:36.156 }' 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.156 09:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.726 [2024-10-15 09:16:54.431077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:36.726 [2024-10-15 09:16:54.431212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.726 [2024-10-15 09:16:54.540610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.726 [2024-10-15 09:16:54.540782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.726 [2024-10-15 09:16:54.540838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:36.726 09:16:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86242 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86242 ']' 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86242 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:36.726 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86242 00:17:36.985 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:36.985 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:36.985 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86242' 00:17:36.985 killing process with pid 86242 00:17:36.985 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86242 00:17:36.985 [2024-10-15 09:16:54.642377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.985 09:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86242 00:17:36.985 [2024-10-15 09:16:54.662335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.988 09:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:37.988 00:17:37.988 real 0m5.396s 00:17:37.988 user 0m7.805s 00:17:37.988 sys 0m0.917s 00:17:37.988 09:16:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.988 09:16:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.988 ************************************ 00:17:37.988 END TEST raid_state_function_test_sb_4k 00:17:37.988 ************************************ 00:17:38.249 09:16:55 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:38.249 09:16:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:38.249 09:16:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:38.249 09:16:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:38.249 ************************************ 00:17:38.249 START TEST raid_superblock_test_4k 00:17:38.249 ************************************ 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:38.249 
09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86494 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86494 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86494 ']' 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.249 09:16:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.249 [2024-10-15 09:16:56.012952] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:17:38.249 [2024-10-15 09:16:56.013152] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86494 ] 00:17:38.509 [2024-10-15 09:16:56.166057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.509 [2024-10-15 09:16:56.283552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.769 [2024-10-15 09:16:56.491731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.769 [2024-10-15 09:16:56.491894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.339 09:16:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.339 09:16:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:17:39.339 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.340 09:16:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.340 malloc1 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.340 [2024-10-15 09:16:57.019481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:39.340 [2024-10-15 09:16:57.019602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.340 [2024-10-15 09:16:57.019647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:39.340 [2024-10-15 09:16:57.019677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.340 [2024-10-15 09:16:57.022052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.340 [2024-10-15 09:16:57.022134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:39.340 pt1 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.340 malloc2 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.340 [2024-10-15 09:16:57.081303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:39.340 [2024-10-15 09:16:57.081379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.340 [2024-10-15 09:16:57.081403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:39.340 [2024-10-15 09:16:57.081412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.340 [2024-10-15 09:16:57.083692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.340 [2024-10-15 
09:16:57.083813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:39.340 pt2 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.340 [2024-10-15 09:16:57.093352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:39.340 [2024-10-15 09:16:57.095261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:39.340 [2024-10-15 09:16:57.095514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:39.340 [2024-10-15 09:16:57.095531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.340 [2024-10-15 09:16:57.095816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:39.340 [2024-10-15 09:16:57.095989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:39.340 [2024-10-15 09:16:57.096002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:39.340 [2024-10-15 09:16:57.096189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.340 "name": "raid_bdev1", 00:17:39.340 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:39.340 "strip_size_kb": 0, 00:17:39.340 "state": "online", 00:17:39.340 "raid_level": "raid1", 00:17:39.340 "superblock": true, 00:17:39.340 "num_base_bdevs": 2, 00:17:39.340 
"num_base_bdevs_discovered": 2, 00:17:39.340 "num_base_bdevs_operational": 2, 00:17:39.340 "base_bdevs_list": [ 00:17:39.340 { 00:17:39.340 "name": "pt1", 00:17:39.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.340 "is_configured": true, 00:17:39.340 "data_offset": 256, 00:17:39.340 "data_size": 7936 00:17:39.340 }, 00:17:39.340 { 00:17:39.340 "name": "pt2", 00:17:39.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.340 "is_configured": true, 00:17:39.340 "data_offset": 256, 00:17:39.340 "data_size": 7936 00:17:39.340 } 00:17:39.340 ] 00:17:39.340 }' 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.340 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.910 [2024-10-15 09:16:57.552803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.910 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.910 "name": "raid_bdev1", 00:17:39.910 "aliases": [ 00:17:39.910 "d3d6fd6f-cef5-470f-a505-ad3a7b469a29" 00:17:39.910 ], 00:17:39.911 "product_name": "Raid Volume", 00:17:39.911 "block_size": 4096, 00:17:39.911 "num_blocks": 7936, 00:17:39.911 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:39.911 "assigned_rate_limits": { 00:17:39.911 "rw_ios_per_sec": 0, 00:17:39.911 "rw_mbytes_per_sec": 0, 00:17:39.911 "r_mbytes_per_sec": 0, 00:17:39.911 "w_mbytes_per_sec": 0 00:17:39.911 }, 00:17:39.911 "claimed": false, 00:17:39.911 "zoned": false, 00:17:39.911 "supported_io_types": { 00:17:39.911 "read": true, 00:17:39.911 "write": true, 00:17:39.911 "unmap": false, 00:17:39.911 "flush": false, 00:17:39.911 "reset": true, 00:17:39.911 "nvme_admin": false, 00:17:39.911 "nvme_io": false, 00:17:39.911 "nvme_io_md": false, 00:17:39.911 "write_zeroes": true, 00:17:39.911 "zcopy": false, 00:17:39.911 "get_zone_info": false, 00:17:39.911 "zone_management": false, 00:17:39.911 "zone_append": false, 00:17:39.911 "compare": false, 00:17:39.911 "compare_and_write": false, 00:17:39.911 "abort": false, 00:17:39.911 "seek_hole": false, 00:17:39.911 "seek_data": false, 00:17:39.911 "copy": false, 00:17:39.911 "nvme_iov_md": false 00:17:39.911 }, 00:17:39.911 "memory_domains": [ 00:17:39.911 { 00:17:39.911 "dma_device_id": "system", 00:17:39.911 "dma_device_type": 1 00:17:39.911 }, 00:17:39.911 { 00:17:39.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.911 "dma_device_type": 2 00:17:39.911 }, 00:17:39.911 { 00:17:39.911 "dma_device_id": "system", 00:17:39.911 "dma_device_type": 1 00:17:39.911 }, 00:17:39.911 { 00:17:39.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.911 "dma_device_type": 2 00:17:39.911 } 00:17:39.911 ], 
00:17:39.911 "driver_specific": { 00:17:39.911 "raid": { 00:17:39.911 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:39.911 "strip_size_kb": 0, 00:17:39.911 "state": "online", 00:17:39.911 "raid_level": "raid1", 00:17:39.911 "superblock": true, 00:17:39.911 "num_base_bdevs": 2, 00:17:39.911 "num_base_bdevs_discovered": 2, 00:17:39.911 "num_base_bdevs_operational": 2, 00:17:39.911 "base_bdevs_list": [ 00:17:39.911 { 00:17:39.911 "name": "pt1", 00:17:39.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.911 "is_configured": true, 00:17:39.911 "data_offset": 256, 00:17:39.911 "data_size": 7936 00:17:39.911 }, 00:17:39.911 { 00:17:39.911 "name": "pt2", 00:17:39.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.911 "is_configured": true, 00:17:39.911 "data_offset": 256, 00:17:39.911 "data_size": 7936 00:17:39.911 } 00:17:39.911 ] 00:17:39.911 } 00:17:39.911 } 00:17:39.911 }' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:39.911 pt2' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.911 09:16:57 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:39.911 [2024-10-15 09:16:57.764380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d3d6fd6f-cef5-470f-a505-ad3a7b469a29 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z d3d6fd6f-cef5-470f-a505-ad3a7b469a29 ']' 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.911 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 [2024-10-15 09:16:57.808044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.171 [2024-10-15 09:16:57.808112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.171 [2024-10-15 09:16:57.808226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.171 [2024-10-15 09:16:57.808310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.171 [2024-10-15 09:16:57.808362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 [2024-10-15 09:16:57.947846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:40.171 [2024-10-15 09:16:57.949857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:40.171 [2024-10-15 09:16:57.949977] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:40.171 [2024-10-15 09:16:57.950079] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:40.171 [2024-10-15 09:16:57.950129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.171 [2024-10-15 09:16:57.950157] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:40.171 request: 00:17:40.171 { 00:17:40.171 "name": "raid_bdev1", 00:17:40.171 "raid_level": "raid1", 00:17:40.171 "base_bdevs": [ 00:17:40.171 "malloc1", 00:17:40.171 "malloc2" 00:17:40.171 ], 00:17:40.171 "superblock": false, 00:17:40.171 "method": "bdev_raid_create", 00:17:40.171 "req_id": 1 00:17:40.171 } 00:17:40.171 Got JSON-RPC error response 00:17:40.171 response: 00:17:40.171 { 00:17:40.171 "code": -17, 00:17:40.171 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:40.171 } 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:40.171 09:16:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 [2024-10-15 09:16:58.011658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.171 [2024-10-15 09:16:58.011759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.171 [2024-10-15 09:16:58.011794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:40.171 [2024-10-15 09:16:58.011824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.171 [2024-10-15 09:16:58.014237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.171 [2024-10-15 09:16:58.014325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.171 [2024-10-15 09:16:58.014446] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:40.171 [2024-10-15 09:16:58.014550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.171 pt1 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.171 "name": "raid_bdev1", 00:17:40.171 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:40.171 "strip_size_kb": 0, 00:17:40.171 "state": "configuring", 00:17:40.171 "raid_level": "raid1", 00:17:40.171 "superblock": true, 00:17:40.171 "num_base_bdevs": 2, 00:17:40.171 "num_base_bdevs_discovered": 1, 00:17:40.171 "num_base_bdevs_operational": 2, 00:17:40.171 "base_bdevs_list": [ 00:17:40.171 { 00:17:40.171 "name": "pt1", 00:17:40.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.171 "is_configured": true, 00:17:40.171 "data_offset": 256, 00:17:40.171 "data_size": 7936 00:17:40.171 }, 00:17:40.171 { 00:17:40.171 "name": null, 00:17:40.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.171 "is_configured": false, 00:17:40.171 "data_offset": 256, 00:17:40.171 "data_size": 7936 00:17:40.171 } 
00:17:40.171 ] 00:17:40.171 }' 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.171 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.741 [2024-10-15 09:16:58.454921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:40.741 [2024-10-15 09:16:58.455051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.741 [2024-10-15 09:16:58.455094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:40.741 [2024-10-15 09:16:58.455126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.741 [2024-10-15 09:16:58.455667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.741 [2024-10-15 09:16:58.455741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:40.741 [2024-10-15 09:16:58.455862] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:40.741 [2024-10-15 09:16:58.455917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:40.741 [2024-10-15 09:16:58.456071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:40.741 [2024-10-15 09:16:58.456111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:40.741 [2024-10-15 09:16:58.456368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:40.741 [2024-10-15 09:16:58.456570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:40.741 [2024-10-15 09:16:58.456585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:40.741 [2024-10-15 09:16:58.456760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.741 pt2 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.741 "name": "raid_bdev1", 00:17:40.741 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:40.741 "strip_size_kb": 0, 00:17:40.741 "state": "online", 00:17:40.741 "raid_level": "raid1", 00:17:40.741 "superblock": true, 00:17:40.741 "num_base_bdevs": 2, 00:17:40.741 "num_base_bdevs_discovered": 2, 00:17:40.741 "num_base_bdevs_operational": 2, 00:17:40.741 "base_bdevs_list": [ 00:17:40.741 { 00:17:40.741 "name": "pt1", 00:17:40.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.741 "is_configured": true, 00:17:40.741 "data_offset": 256, 00:17:40.741 "data_size": 7936 00:17:40.741 }, 00:17:40.741 { 00:17:40.741 "name": "pt2", 00:17:40.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.741 "is_configured": true, 00:17:40.741 "data_offset": 256, 00:17:40.741 "data_size": 7936 00:17:40.741 } 00:17:40.741 ] 00:17:40.741 }' 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.741 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.001 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.001 [2024-10-15 09:16:58.882537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.262 09:16:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.262 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:41.262 "name": "raid_bdev1", 00:17:41.262 "aliases": [ 00:17:41.262 "d3d6fd6f-cef5-470f-a505-ad3a7b469a29" 00:17:41.262 ], 00:17:41.262 "product_name": "Raid Volume", 00:17:41.262 "block_size": 4096, 00:17:41.262 "num_blocks": 7936, 00:17:41.262 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:41.262 "assigned_rate_limits": { 00:17:41.262 "rw_ios_per_sec": 0, 00:17:41.262 "rw_mbytes_per_sec": 0, 00:17:41.262 "r_mbytes_per_sec": 0, 00:17:41.262 "w_mbytes_per_sec": 0 00:17:41.262 }, 00:17:41.262 "claimed": false, 00:17:41.262 "zoned": false, 00:17:41.262 "supported_io_types": { 00:17:41.262 "read": true, 00:17:41.262 "write": true, 00:17:41.262 "unmap": false, 
00:17:41.262 "flush": false, 00:17:41.262 "reset": true, 00:17:41.262 "nvme_admin": false, 00:17:41.262 "nvme_io": false, 00:17:41.262 "nvme_io_md": false, 00:17:41.262 "write_zeroes": true, 00:17:41.262 "zcopy": false, 00:17:41.262 "get_zone_info": false, 00:17:41.262 "zone_management": false, 00:17:41.262 "zone_append": false, 00:17:41.262 "compare": false, 00:17:41.262 "compare_and_write": false, 00:17:41.262 "abort": false, 00:17:41.262 "seek_hole": false, 00:17:41.262 "seek_data": false, 00:17:41.262 "copy": false, 00:17:41.262 "nvme_iov_md": false 00:17:41.262 }, 00:17:41.262 "memory_domains": [ 00:17:41.262 { 00:17:41.262 "dma_device_id": "system", 00:17:41.262 "dma_device_type": 1 00:17:41.262 }, 00:17:41.262 { 00:17:41.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.262 "dma_device_type": 2 00:17:41.262 }, 00:17:41.262 { 00:17:41.262 "dma_device_id": "system", 00:17:41.262 "dma_device_type": 1 00:17:41.262 }, 00:17:41.262 { 00:17:41.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.262 "dma_device_type": 2 00:17:41.262 } 00:17:41.262 ], 00:17:41.262 "driver_specific": { 00:17:41.262 "raid": { 00:17:41.262 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:41.262 "strip_size_kb": 0, 00:17:41.262 "state": "online", 00:17:41.262 "raid_level": "raid1", 00:17:41.262 "superblock": true, 00:17:41.262 "num_base_bdevs": 2, 00:17:41.262 "num_base_bdevs_discovered": 2, 00:17:41.262 "num_base_bdevs_operational": 2, 00:17:41.262 "base_bdevs_list": [ 00:17:41.262 { 00:17:41.262 "name": "pt1", 00:17:41.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.262 "is_configured": true, 00:17:41.262 "data_offset": 256, 00:17:41.262 "data_size": 7936 00:17:41.262 }, 00:17:41.262 { 00:17:41.262 "name": "pt2", 00:17:41.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.262 "is_configured": true, 00:17:41.262 "data_offset": 256, 00:17:41.262 "data_size": 7936 00:17:41.262 } 00:17:41.262 ] 00:17:41.262 } 00:17:41.262 } 00:17:41.262 }' 00:17:41.262 
09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:41.262 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:41.262 pt2' 00:17:41.262 09:16:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.262 [2024-10-15 09:16:59.122112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.262 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' d3d6fd6f-cef5-470f-a505-ad3a7b469a29 '!=' d3d6fd6f-cef5-470f-a505-ad3a7b469a29 ']' 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.522 [2024-10-15 09:16:59.165871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.522 "name": "raid_bdev1", 00:17:41.522 "uuid": 
"d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:41.522 "strip_size_kb": 0, 00:17:41.522 "state": "online", 00:17:41.522 "raid_level": "raid1", 00:17:41.522 "superblock": true, 00:17:41.522 "num_base_bdevs": 2, 00:17:41.522 "num_base_bdevs_discovered": 1, 00:17:41.522 "num_base_bdevs_operational": 1, 00:17:41.522 "base_bdevs_list": [ 00:17:41.522 { 00:17:41.522 "name": null, 00:17:41.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.522 "is_configured": false, 00:17:41.522 "data_offset": 0, 00:17:41.522 "data_size": 7936 00:17:41.522 }, 00:17:41.522 { 00:17:41.522 "name": "pt2", 00:17:41.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.522 "is_configured": true, 00:17:41.522 "data_offset": 256, 00:17:41.522 "data_size": 7936 00:17:41.522 } 00:17:41.522 ] 00:17:41.522 }' 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.522 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.782 [2024-10-15 09:16:59.637023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.782 [2024-10-15 09:16:59.637054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.782 [2024-10-15 09:16:59.637149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.782 [2024-10-15 09:16:59.637199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.782 [2024-10-15 09:16:59.637212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.782 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.042 [2024-10-15 09:16:59.712889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.042 [2024-10-15 09:16:59.712994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.042 [2024-10-15 09:16:59.713015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:42.042 [2024-10-15 09:16:59.713025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.042 [2024-10-15 09:16:59.715423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.042 [2024-10-15 09:16:59.715464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.042 [2024-10-15 09:16:59.715551] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:42.042 [2024-10-15 09:16:59.715600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.042 [2024-10-15 09:16:59.715724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:42.042 [2024-10-15 09:16:59.715736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:42.042 [2024-10-15 09:16:59.715959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:42.042 [2024-10-15 09:16:59.716131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:42.042 [2024-10-15 09:16:59.716140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:42.042 [2024-10-15 09:16:59.716292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.042 pt2 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.042 09:16:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.042 "name": "raid_bdev1", 00:17:42.042 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:42.042 "strip_size_kb": 0, 00:17:42.042 "state": "online", 00:17:42.042 "raid_level": "raid1", 00:17:42.042 "superblock": true, 00:17:42.042 "num_base_bdevs": 2, 00:17:42.042 "num_base_bdevs_discovered": 1, 00:17:42.042 "num_base_bdevs_operational": 1, 00:17:42.042 "base_bdevs_list": [ 00:17:42.042 { 00:17:42.042 "name": null, 00:17:42.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.042 "is_configured": false, 00:17:42.042 "data_offset": 256, 00:17:42.042 "data_size": 7936 00:17:42.042 }, 00:17:42.042 { 00:17:42.042 "name": "pt2", 00:17:42.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.042 "is_configured": true, 00:17:42.042 "data_offset": 256, 00:17:42.042 "data_size": 7936 00:17:42.042 } 00:17:42.042 ] 00:17:42.042 }' 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.042 09:16:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.301 [2024-10-15 09:17:00.132164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.301 [2024-10-15 09:17:00.132264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.301 [2024-10-15 09:17:00.132408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.301 [2024-10-15 09:17:00.132494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:42.301 [2024-10-15 09:17:00.132544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.301 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.301 [2024-10-15 09:17:00.196090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:42.301 [2024-10-15 09:17:00.196214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.301 [2024-10-15 09:17:00.196258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:42.301 [2024-10-15 09:17:00.196301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.561 [2024-10-15 09:17:00.198821] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.561 [2024-10-15 09:17:00.198910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:42.561 [2024-10-15 09:17:00.199037] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:42.561 [2024-10-15 09:17:00.199123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:42.561 [2024-10-15 09:17:00.199324] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:42.561 [2024-10-15 09:17:00.199382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.561 [2024-10-15 09:17:00.199426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:42.561 [2024-10-15 09:17:00.199544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.561 [2024-10-15 09:17:00.199674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:42.561 [2024-10-15 09:17:00.199729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:42.561 [2024-10-15 09:17:00.200018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:42.561 [2024-10-15 09:17:00.200237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:42.561 pt1 00:17:42.561 [2024-10-15 09:17:00.200287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:42.561 [2024-10-15 09:17:00.200497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.561 "name": "raid_bdev1", 00:17:42.561 "uuid": "d3d6fd6f-cef5-470f-a505-ad3a7b469a29", 00:17:42.561 "strip_size_kb": 0, 00:17:42.561 "state": "online", 00:17:42.561 
"raid_level": "raid1", 00:17:42.561 "superblock": true, 00:17:42.561 "num_base_bdevs": 2, 00:17:42.561 "num_base_bdevs_discovered": 1, 00:17:42.561 "num_base_bdevs_operational": 1, 00:17:42.561 "base_bdevs_list": [ 00:17:42.561 { 00:17:42.561 "name": null, 00:17:42.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.561 "is_configured": false, 00:17:42.561 "data_offset": 256, 00:17:42.561 "data_size": 7936 00:17:42.561 }, 00:17:42.561 { 00:17:42.561 "name": "pt2", 00:17:42.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.561 "is_configured": true, 00:17:42.561 "data_offset": 256, 00:17:42.561 "data_size": 7936 00:17:42.561 } 00:17:42.561 ] 00:17:42.561 }' 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.561 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.821 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:42.821 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:42.821 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.821 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.821 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.821 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:17:43.081 [2024-10-15 09:17:00.727882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' d3d6fd6f-cef5-470f-a505-ad3a7b469a29 '!=' d3d6fd6f-cef5-470f-a505-ad3a7b469a29 ']' 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86494 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86494 ']' 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86494 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86494 00:17:43.081 killing process with pid 86494 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86494' 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86494 00:17:43.081 [2024-10-15 09:17:00.794627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.081 [2024-10-15 09:17:00.794756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.081 09:17:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86494 00:17:43.081 [2024-10-15 09:17:00.794815] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.081 [2024-10-15 09:17:00.794831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:43.341 [2024-10-15 09:17:01.004375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.284 ************************************ 00:17:44.284 END TEST raid_superblock_test_4k 00:17:44.284 ************************************ 00:17:44.284 09:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:44.284 00:17:44.284 real 0m6.217s 00:17:44.284 user 0m9.472s 00:17:44.284 sys 0m1.101s 00:17:44.284 09:17:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.284 09:17:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.545 09:17:02 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:44.545 09:17:02 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:44.545 09:17:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:44.545 09:17:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.545 09:17:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.545 ************************************ 00:17:44.545 START TEST raid_rebuild_test_sb_4k 00:17:44.545 ************************************ 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:44.545 
09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86823 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86823 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86823 ']' 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.545 09:17:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.545 [2024-10-15 09:17:02.310611] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:17:44.545 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:44.545 Zero copy mechanism will not be used. 
00:17:44.545 [2024-10-15 09:17:02.310849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86823 ] 00:17:44.806 [2024-10-15 09:17:02.475928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.806 [2024-10-15 09:17:02.598668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.075 [2024-10-15 09:17:02.823591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.075 [2024-10-15 09:17:02.823651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.337 BaseBdev1_malloc 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.337 [2024-10-15 09:17:03.211178] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:45.337 [2024-10-15 09:17:03.211254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.337 [2024-10-15 09:17:03.211283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:45.337 [2024-10-15 09:17:03.211297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.337 [2024-10-15 09:17:03.213668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.337 [2024-10-15 09:17:03.213719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.337 BaseBdev1 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.337 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 BaseBdev2_malloc 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 [2024-10-15 09:17:03.271217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:45.596 [2024-10-15 09:17:03.271346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:45.596 [2024-10-15 09:17:03.271373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:45.596 [2024-10-15 09:17:03.271384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.596 [2024-10-15 09:17:03.273507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.596 [2024-10-15 09:17:03.273550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:45.596 BaseBdev2 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 spare_malloc 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 spare_delay 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 
[2024-10-15 09:17:03.343169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:45.596 [2024-10-15 09:17:03.343231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.596 [2024-10-15 09:17:03.343251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:45.596 [2024-10-15 09:17:03.343261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.596 [2024-10-15 09:17:03.345419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.596 [2024-10-15 09:17:03.345464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:45.596 spare 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 [2024-10-15 09:17:03.355201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.596 [2024-10-15 09:17:03.357232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.596 [2024-10-15 09:17:03.357490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:45.596 [2024-10-15 09:17:03.357550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:45.596 [2024-10-15 09:17:03.357890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:45.596 [2024-10-15 09:17:03.358123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:45.596 [2024-10-15 
09:17:03.358171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:45.596 [2024-10-15 09:17:03.358379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.596 "name": "raid_bdev1", 00:17:45.596 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:45.596 "strip_size_kb": 0, 00:17:45.596 "state": "online", 00:17:45.596 "raid_level": "raid1", 00:17:45.596 "superblock": true, 00:17:45.596 "num_base_bdevs": 2, 00:17:45.596 "num_base_bdevs_discovered": 2, 00:17:45.596 "num_base_bdevs_operational": 2, 00:17:45.596 "base_bdevs_list": [ 00:17:45.596 { 00:17:45.596 "name": "BaseBdev1", 00:17:45.596 "uuid": "55aa8767-627d-5ed1-8846-c2702dc325f0", 00:17:45.596 "is_configured": true, 00:17:45.596 "data_offset": 256, 00:17:45.596 "data_size": 7936 00:17:45.596 }, 00:17:45.596 { 00:17:45.596 "name": "BaseBdev2", 00:17:45.596 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:45.596 "is_configured": true, 00:17:45.596 "data_offset": 256, 00:17:45.596 "data_size": 7936 00:17:45.596 } 00:17:45.596 ] 00:17:45.596 }' 00:17:45.596 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.597 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:46.163 [2024-10-15 09:17:03.834768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.163 09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.163 
09:17:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:46.420 [2024-10-15 09:17:04.133984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:46.420 /dev/nbd0 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.420 1+0 records in 00:17:46.420 1+0 records out 00:17:46.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302768 s, 13.5 MB/s 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:46.420 09:17:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:46.420 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:47.358 7936+0 records in 00:17:47.358 7936+0 records out 00:17:47.358 32505856 bytes (33 MB, 31 MiB) copied, 0.697392 s, 46.6 MB/s 00:17:47.358 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:47.358 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.358 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:47.358 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.358 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:47.358 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.358 09:17:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.358 [2024-10-15 09:17:05.120930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
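The `waitfornbd` helper exercised above polls `/proc/partitions` for the device name, retrying up to 20 times before giving up. A minimal standalone sketch of the same pattern follows; a temp file stands in for `/proc/partitions` (and `waitfordev` is a hypothetical name, not the SPDK helper) so the sketch runs anywhere:

```shell
# Standalone sketch of the waitfornbd polling pattern: grep a partitions
# listing for the device name, retrying up to 20 times with a short sleep.
# A temp file stands in for /proc/partitions so this runs on any host.
partitions=$(mktemp)
echo "nbd0" > "$partitions"          # pretend the kernel registered the device

waitfordev() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches whole words only, so "nbd0" does not match "nbd01"
        grep -q -w "$name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}

waitfordev nbd0 && echo "nbd0 present"
```

The real helper breaks out of the loop on the first successful `grep -q -w`, exactly as the `break` at `autotest_common.sh@873` shows in the trace above.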
00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.358 [2024-10-15 09:17:05.148997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.358 09:17:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.358 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.358 "name": "raid_bdev1", 00:17:47.358 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:47.358 "strip_size_kb": 0, 00:17:47.358 "state": "online", 00:17:47.358 "raid_level": "raid1", 00:17:47.358 "superblock": true, 00:17:47.358 "num_base_bdevs": 2, 00:17:47.358 "num_base_bdevs_discovered": 1, 00:17:47.358 "num_base_bdevs_operational": 1, 00:17:47.358 "base_bdevs_list": [ 00:17:47.358 { 00:17:47.358 "name": null, 00:17:47.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.358 "is_configured": false, 00:17:47.358 "data_offset": 0, 00:17:47.358 "data_size": 7936 00:17:47.359 }, 00:17:47.359 { 00:17:47.359 "name": "BaseBdev2", 00:17:47.359 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:47.359 "is_configured": true, 00:17:47.359 "data_offset": 256, 00:17:47.359 
"data_size": 7936 00:17:47.359 } 00:17:47.359 ] 00:17:47.359 }' 00:17:47.359 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.359 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.927 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:47.927 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.927 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.927 [2024-10-15 09:17:05.604241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.927 [2024-10-15 09:17:05.621197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:47.927 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.927 09:17:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:47.927 [2024-10-15 09:17:05.623129] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.865 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.865 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.865 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.866 "name": "raid_bdev1", 00:17:48.866 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:48.866 "strip_size_kb": 0, 00:17:48.866 "state": "online", 00:17:48.866 "raid_level": "raid1", 00:17:48.866 "superblock": true, 00:17:48.866 "num_base_bdevs": 2, 00:17:48.866 "num_base_bdevs_discovered": 2, 00:17:48.866 "num_base_bdevs_operational": 2, 00:17:48.866 "process": { 00:17:48.866 "type": "rebuild", 00:17:48.866 "target": "spare", 00:17:48.866 "progress": { 00:17:48.866 "blocks": 2560, 00:17:48.866 "percent": 32 00:17:48.866 } 00:17:48.866 }, 00:17:48.866 "base_bdevs_list": [ 00:17:48.866 { 00:17:48.866 "name": "spare", 00:17:48.866 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:48.866 "is_configured": true, 00:17:48.866 "data_offset": 256, 00:17:48.866 "data_size": 7936 00:17:48.866 }, 00:17:48.866 { 00:17:48.866 "name": "BaseBdev2", 00:17:48.866 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:48.866 "is_configured": true, 00:17:48.866 "data_offset": 256, 00:17:48.866 "data_size": 7936 00:17:48.866 } 00:17:48.866 ] 00:17:48.866 }' 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.866 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
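The `verify_raid_bdev_process` checks traced above pipe `rpc.py bdev_raid_get_bdevs all` through `jq`, first selecting the bdev under test and then probing `.process.type // "none"` and `.process.target // "none"`. A sketch of those filters against a small inline sample (a stand-in for real RPC output, assuming `jq` is installed):

```shell
# Sketch of the jq filters verify_raid_bdev_process uses, run against an
# inline sample standing in for `rpc.py bdev_raid_get_bdevs all` output.
bdevs='[
  {"name": "raid_bdev1", "state": "online",
   "process": {"type": "rebuild", "target": "spare",
               "progress": {"blocks": 2560, "percent": 32}}},
  {"name": "raid_bdev2", "state": "online"}
]'

# Select the entry under test, as bdev_raid.sh@174 does in the trace.
info=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# `// "none"` substitutes a default when no process field exists, which is
# what lets the same check distinguish "rebuild running" from "idle".
ptype=$(echo "$info" | jq -r '.process.type // "none"')
ptarget=$(echo "$info" | jq -r '.process.target // "none"')
echo "$ptype $ptarget"               # prints: rebuild spare
```

On a bdev with no active process the same filters print `none none`, which is the branch the `[[ none == \n\o\n\e ]]` comparisons later in the trace take.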
00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.129 [2024-10-15 09:17:06.778882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.129 [2024-10-15 09:17:06.829251] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:49.129 [2024-10-15 09:17:06.829326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.129 [2024-10-15 09:17:06.829342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.129 [2024-10-15 09:17:06.829355] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.129 "name": "raid_bdev1", 00:17:49.129 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:49.129 "strip_size_kb": 0, 00:17:49.129 "state": "online", 00:17:49.129 "raid_level": "raid1", 00:17:49.129 "superblock": true, 00:17:49.129 "num_base_bdevs": 2, 00:17:49.129 "num_base_bdevs_discovered": 1, 00:17:49.129 "num_base_bdevs_operational": 1, 00:17:49.129 "base_bdevs_list": [ 00:17:49.129 { 00:17:49.129 "name": null, 00:17:49.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.129 "is_configured": false, 00:17:49.129 "data_offset": 0, 00:17:49.129 "data_size": 7936 00:17:49.129 }, 00:17:49.129 { 00:17:49.129 "name": "BaseBdev2", 00:17:49.129 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:49.129 "is_configured": true, 00:17:49.129 "data_offset": 256, 00:17:49.129 "data_size": 7936 00:17:49.129 } 00:17:49.129 ] 00:17:49.129 }' 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.129 09:17:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.703 09:17:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.703 "name": "raid_bdev1", 00:17:49.703 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:49.703 "strip_size_kb": 0, 00:17:49.703 "state": "online", 00:17:49.703 "raid_level": "raid1", 00:17:49.703 "superblock": true, 00:17:49.703 "num_base_bdevs": 2, 00:17:49.703 "num_base_bdevs_discovered": 1, 00:17:49.703 "num_base_bdevs_operational": 1, 00:17:49.703 "base_bdevs_list": [ 00:17:49.703 { 00:17:49.703 "name": null, 00:17:49.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.703 "is_configured": false, 00:17:49.703 "data_offset": 0, 00:17:49.703 "data_size": 7936 00:17:49.703 }, 00:17:49.703 { 00:17:49.703 "name": "BaseBdev2", 00:17:49.703 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:49.703 "is_configured": true, 00:17:49.703 "data_offset": 
256, 00:17:49.703 "data_size": 7936 00:17:49.703 } 00:17:49.703 ] 00:17:49.703 }' 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.703 [2024-10-15 09:17:07.478516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.703 [2024-10-15 09:17:07.495380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.703 09:17:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:49.703 [2024-10-15 09:17:07.497634] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.654 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.914 "name": "raid_bdev1", 00:17:50.914 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:50.914 "strip_size_kb": 0, 00:17:50.914 "state": "online", 00:17:50.914 "raid_level": "raid1", 00:17:50.914 "superblock": true, 00:17:50.914 "num_base_bdevs": 2, 00:17:50.914 "num_base_bdevs_discovered": 2, 00:17:50.914 "num_base_bdevs_operational": 2, 00:17:50.914 "process": { 00:17:50.914 "type": "rebuild", 00:17:50.914 "target": "spare", 00:17:50.914 "progress": { 00:17:50.914 "blocks": 2560, 00:17:50.914 "percent": 32 00:17:50.914 } 00:17:50.914 }, 00:17:50.914 "base_bdevs_list": [ 00:17:50.914 { 00:17:50.914 "name": "spare", 00:17:50.914 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:50.914 "is_configured": true, 00:17:50.914 "data_offset": 256, 00:17:50.914 "data_size": 7936 00:17:50.914 }, 00:17:50.914 { 00:17:50.914 "name": "BaseBdev2", 00:17:50.914 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:50.914 "is_configured": true, 00:17:50.914 "data_offset": 256, 00:17:50.914 "data_size": 7936 00:17:50.914 } 00:17:50.914 ] 00:17:50.914 }' 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:50.914 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=712 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.914 09:17:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.914 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.914 "name": "raid_bdev1", 00:17:50.914 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:50.914 "strip_size_kb": 0, 00:17:50.914 "state": "online", 00:17:50.914 "raid_level": "raid1", 00:17:50.914 "superblock": true, 00:17:50.914 "num_base_bdevs": 2, 00:17:50.914 "num_base_bdevs_discovered": 2, 00:17:50.914 "num_base_bdevs_operational": 2, 00:17:50.914 "process": { 00:17:50.914 "type": "rebuild", 00:17:50.914 "target": "spare", 00:17:50.914 "progress": { 00:17:50.914 "blocks": 2816, 00:17:50.914 "percent": 35 00:17:50.914 } 00:17:50.914 }, 00:17:50.914 "base_bdevs_list": [ 00:17:50.914 { 00:17:50.914 "name": "spare", 00:17:50.914 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:50.914 "is_configured": true, 00:17:50.914 "data_offset": 256, 00:17:50.914 "data_size": 7936 00:17:50.915 }, 00:17:50.915 { 00:17:50.915 "name": "BaseBdev2", 00:17:50.915 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:50.915 "is_configured": true, 00:17:50.915 "data_offset": 256, 00:17:50.915 "data_size": 7936 00:17:50.915 } 00:17:50.915 ] 00:17:50.915 }' 00:17:50.915 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.915 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.915 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.915 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.915 09:17:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.294 "name": "raid_bdev1", 00:17:52.294 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:52.294 "strip_size_kb": 0, 00:17:52.294 "state": "online", 00:17:52.294 "raid_level": "raid1", 00:17:52.294 "superblock": true, 00:17:52.294 "num_base_bdevs": 2, 00:17:52.294 "num_base_bdevs_discovered": 2, 00:17:52.294 "num_base_bdevs_operational": 2, 00:17:52.294 "process": { 00:17:52.294 "type": "rebuild", 00:17:52.294 "target": "spare", 00:17:52.294 "progress": { 00:17:52.294 "blocks": 5632, 00:17:52.294 "percent": 70 00:17:52.294 } 00:17:52.294 }, 00:17:52.294 "base_bdevs_list": [ 00:17:52.294 { 
00:17:52.294 "name": "spare", 00:17:52.294 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:52.294 "is_configured": true, 00:17:52.294 "data_offset": 256, 00:17:52.294 "data_size": 7936 00:17:52.294 }, 00:17:52.294 { 00:17:52.294 "name": "BaseBdev2", 00:17:52.294 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:52.294 "is_configured": true, 00:17:52.294 "data_offset": 256, 00:17:52.294 "data_size": 7936 00:17:52.294 } 00:17:52.294 ] 00:17:52.294 }' 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.294 09:17:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.863 [2024-10-15 09:17:10.613153] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:52.863 [2024-10-15 09:17:10.613242] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:52.863 [2024-10-15 09:17:10.613397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.123 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.124 09:17:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.124 "name": "raid_bdev1", 00:17:53.124 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:53.124 "strip_size_kb": 0, 00:17:53.124 "state": "online", 00:17:53.124 "raid_level": "raid1", 00:17:53.124 "superblock": true, 00:17:53.124 "num_base_bdevs": 2, 00:17:53.124 "num_base_bdevs_discovered": 2, 00:17:53.124 "num_base_bdevs_operational": 2, 00:17:53.124 "base_bdevs_list": [ 00:17:53.124 { 00:17:53.124 "name": "spare", 00:17:53.124 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:53.124 "is_configured": true, 00:17:53.124 "data_offset": 256, 00:17:53.124 "data_size": 7936 00:17:53.124 }, 00:17:53.124 { 00:17:53.124 "name": "BaseBdev2", 00:17:53.124 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:53.124 "is_configured": true, 00:17:53.124 "data_offset": 256, 00:17:53.124 "data_size": 7936 00:17:53.124 } 00:17:53.124 ] 00:17:53.124 }' 00:17:53.124 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.388 "name": "raid_bdev1", 00:17:53.388 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:53.388 "strip_size_kb": 0, 00:17:53.388 "state": "online", 00:17:53.388 "raid_level": "raid1", 00:17:53.388 "superblock": true, 00:17:53.388 "num_base_bdevs": 2, 00:17:53.388 "num_base_bdevs_discovered": 2, 00:17:53.388 "num_base_bdevs_operational": 2, 00:17:53.388 "base_bdevs_list": [ 00:17:53.388 { 00:17:53.388 "name": "spare", 00:17:53.388 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:53.388 "is_configured": true, 00:17:53.388 
"data_offset": 256, 00:17:53.388 "data_size": 7936 00:17:53.388 }, 00:17:53.388 { 00:17:53.388 "name": "BaseBdev2", 00:17:53.388 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:53.388 "is_configured": true, 00:17:53.388 "data_offset": 256, 00:17:53.388 "data_size": 7936 00:17:53.388 } 00:17:53.388 ] 00:17:53.388 }' 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.388 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.653 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.653 "name": "raid_bdev1", 00:17:53.653 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:53.653 "strip_size_kb": 0, 00:17:53.653 "state": "online", 00:17:53.653 "raid_level": "raid1", 00:17:53.653 "superblock": true, 00:17:53.653 "num_base_bdevs": 2, 00:17:53.653 "num_base_bdevs_discovered": 2, 00:17:53.653 "num_base_bdevs_operational": 2, 00:17:53.653 "base_bdevs_list": [ 00:17:53.653 { 00:17:53.653 "name": "spare", 00:17:53.653 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:53.653 "is_configured": true, 00:17:53.653 "data_offset": 256, 00:17:53.653 "data_size": 7936 00:17:53.653 }, 00:17:53.653 { 00:17:53.653 "name": "BaseBdev2", 00:17:53.653 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:53.653 "is_configured": true, 00:17:53.653 "data_offset": 256, 00:17:53.653 "data_size": 7936 00:17:53.653 } 00:17:53.653 ] 00:17:53.653 }' 00:17:53.653 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.653 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.920 
[2024-10-15 09:17:11.742425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.920 [2024-10-15 09:17:11.742467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.920 [2024-10-15 09:17:11.742568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.920 [2024-10-15 09:17:11.742647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.920 [2024-10-15 09:17:11.742659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:53.920 09:17:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:54.189 /dev/nbd0 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:54.189 1+0 records in 00:17:54.189 1+0 records out 00:17:54.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526899 s, 7.8 MB/s 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:54.189 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:54.451 /dev/nbd1 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:54.451 1+0 records in 00:17:54.451 1+0 records out 00:17:54.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448408 s, 9.1 MB/s 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:54.451 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.710 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.970 09:17:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:55.229 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:55.230 09:17:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.230 [2024-10-15 09:17:13.091896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:55.230 [2024-10-15 09:17:13.091963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.230 [2024-10-15 09:17:13.091991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:55.230 [2024-10-15 09:17:13.092001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.230 [2024-10-15 09:17:13.094413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.230 
[2024-10-15 09:17:13.094455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:55.230 [2024-10-15 09:17:13.094577] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:55.230 [2024-10-15 09:17:13.094645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.230 [2024-10-15 09:17:13.094848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.230 spare 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.230 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.489 [2024-10-15 09:17:13.194770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:55.489 [2024-10-15 09:17:13.194904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:55.489 [2024-10-15 09:17:13.195285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:55.489 [2024-10-15 09:17:13.195496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:55.489 [2024-10-15 09:17:13.195512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:55.489 [2024-10-15 09:17:13.195789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.489 09:17:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.489 "name": "raid_bdev1", 00:17:55.489 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:55.489 "strip_size_kb": 0, 00:17:55.489 "state": "online", 00:17:55.489 "raid_level": "raid1", 00:17:55.489 "superblock": true, 00:17:55.489 "num_base_bdevs": 2, 00:17:55.489 "num_base_bdevs_discovered": 2, 00:17:55.489 "num_base_bdevs_operational": 2, 
00:17:55.489 "base_bdevs_list": [ 00:17:55.489 { 00:17:55.489 "name": "spare", 00:17:55.489 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:55.489 "is_configured": true, 00:17:55.489 "data_offset": 256, 00:17:55.489 "data_size": 7936 00:17:55.489 }, 00:17:55.489 { 00:17:55.489 "name": "BaseBdev2", 00:17:55.489 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:55.489 "is_configured": true, 00:17:55.489 "data_offset": 256, 00:17:55.489 "data_size": 7936 00:17:55.489 } 00:17:55.489 ] 00:17:55.489 }' 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.489 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.056 "name": "raid_bdev1", 00:17:56.056 
"uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:56.056 "strip_size_kb": 0, 00:17:56.056 "state": "online", 00:17:56.056 "raid_level": "raid1", 00:17:56.056 "superblock": true, 00:17:56.056 "num_base_bdevs": 2, 00:17:56.056 "num_base_bdevs_discovered": 2, 00:17:56.056 "num_base_bdevs_operational": 2, 00:17:56.056 "base_bdevs_list": [ 00:17:56.056 { 00:17:56.056 "name": "spare", 00:17:56.056 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:56.056 "is_configured": true, 00:17:56.056 "data_offset": 256, 00:17:56.056 "data_size": 7936 00:17:56.056 }, 00:17:56.056 { 00:17:56.056 "name": "BaseBdev2", 00:17:56.056 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:56.056 "is_configured": true, 00:17:56.056 "data_offset": 256, 00:17:56.056 "data_size": 7936 00:17:56.056 } 00:17:56.056 ] 00:17:56.056 }' 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.056 [2024-10-15 09:17:13.826820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.056 
09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.056 "name": "raid_bdev1", 00:17:56.056 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:56.056 "strip_size_kb": 0, 00:17:56.056 "state": "online", 00:17:56.056 "raid_level": "raid1", 00:17:56.056 "superblock": true, 00:17:56.056 "num_base_bdevs": 2, 00:17:56.056 "num_base_bdevs_discovered": 1, 00:17:56.056 "num_base_bdevs_operational": 1, 00:17:56.056 "base_bdevs_list": [ 00:17:56.056 { 00:17:56.056 "name": null, 00:17:56.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.056 "is_configured": false, 00:17:56.056 "data_offset": 0, 00:17:56.056 "data_size": 7936 00:17:56.056 }, 00:17:56.056 { 00:17:56.056 "name": "BaseBdev2", 00:17:56.056 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:56.056 "is_configured": true, 00:17:56.056 "data_offset": 256, 00:17:56.056 "data_size": 7936 00:17:56.056 } 00:17:56.056 ] 00:17:56.056 }' 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.056 09:17:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.623 09:17:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.623 09:17:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.623 09:17:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.623 [2024-10-15 09:17:14.286056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.623 [2024-10-15 09:17:14.286364] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:17:56.623 [2024-10-15 09:17:14.286443] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:56.623 [2024-10-15 09:17:14.286513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.624 [2024-10-15 09:17:14.304535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:56.624 09:17:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.624 09:17:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:56.624 [2024-10-15 09:17:14.306792] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.559 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.559 
"name": "raid_bdev1", 00:17:57.559 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:57.559 "strip_size_kb": 0, 00:17:57.559 "state": "online", 00:17:57.559 "raid_level": "raid1", 00:17:57.559 "superblock": true, 00:17:57.559 "num_base_bdevs": 2, 00:17:57.559 "num_base_bdevs_discovered": 2, 00:17:57.559 "num_base_bdevs_operational": 2, 00:17:57.559 "process": { 00:17:57.559 "type": "rebuild", 00:17:57.559 "target": "spare", 00:17:57.559 "progress": { 00:17:57.559 "blocks": 2560, 00:17:57.559 "percent": 32 00:17:57.559 } 00:17:57.559 }, 00:17:57.559 "base_bdevs_list": [ 00:17:57.559 { 00:17:57.559 "name": "spare", 00:17:57.560 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:57.560 "is_configured": true, 00:17:57.560 "data_offset": 256, 00:17:57.560 "data_size": 7936 00:17:57.560 }, 00:17:57.560 { 00:17:57.560 "name": "BaseBdev2", 00:17:57.560 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:57.560 "is_configured": true, 00:17:57.560 "data_offset": 256, 00:17:57.560 "data_size": 7936 00:17:57.560 } 00:17:57.560 ] 00:17:57.560 }' 00:17:57.560 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.560 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.560 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.560 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.560 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:57.560 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.560 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.560 [2024-10-15 09:17:15.450530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.819 [2024-10-15 
09:17:15.512855] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:57.819 [2024-10-15 09:17:15.513019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.819 [2024-10-15 09:17:15.513037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.819 [2024-10-15 09:17:15.513046] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.819 "name": "raid_bdev1", 00:17:57.819 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:57.819 "strip_size_kb": 0, 00:17:57.819 "state": "online", 00:17:57.819 "raid_level": "raid1", 00:17:57.819 "superblock": true, 00:17:57.819 "num_base_bdevs": 2, 00:17:57.819 "num_base_bdevs_discovered": 1, 00:17:57.819 "num_base_bdevs_operational": 1, 00:17:57.819 "base_bdevs_list": [ 00:17:57.819 { 00:17:57.819 "name": null, 00:17:57.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.819 "is_configured": false, 00:17:57.819 "data_offset": 0, 00:17:57.819 "data_size": 7936 00:17:57.819 }, 00:17:57.819 { 00:17:57.819 "name": "BaseBdev2", 00:17:57.819 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:57.819 "is_configured": true, 00:17:57.819 "data_offset": 256, 00:17:57.819 "data_size": 7936 00:17:57.819 } 00:17:57.819 ] 00:17:57.819 }' 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.819 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.392 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:58.392 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.392 09:17:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.392 [2024-10-15 09:17:15.993818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:58.392 [2024-10-15 09:17:15.993952] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.392 [2024-10-15 09:17:15.994006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:58.392 [2024-10-15 09:17:15.994043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.392 [2024-10-15 09:17:15.994623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.392 [2024-10-15 09:17:15.994726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:58.392 [2024-10-15 09:17:15.994884] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:58.392 [2024-10-15 09:17:15.994929] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:58.392 [2024-10-15 09:17:15.994973] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:58.392 [2024-10-15 09:17:15.995049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.392 [2024-10-15 09:17:16.013477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:58.392 spare 00:17:58.392 09:17:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.392 09:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:58.392 [2024-10-15 09:17:16.015814] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.331 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.331 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.331 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.331 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.331 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.331 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.332 "name": "raid_bdev1", 00:17:59.332 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:59.332 "strip_size_kb": 0, 00:17:59.332 
"state": "online", 00:17:59.332 "raid_level": "raid1", 00:17:59.332 "superblock": true, 00:17:59.332 "num_base_bdevs": 2, 00:17:59.332 "num_base_bdevs_discovered": 2, 00:17:59.332 "num_base_bdevs_operational": 2, 00:17:59.332 "process": { 00:17:59.332 "type": "rebuild", 00:17:59.332 "target": "spare", 00:17:59.332 "progress": { 00:17:59.332 "blocks": 2560, 00:17:59.332 "percent": 32 00:17:59.332 } 00:17:59.332 }, 00:17:59.332 "base_bdevs_list": [ 00:17:59.332 { 00:17:59.332 "name": "spare", 00:17:59.332 "uuid": "f70ae203-8af7-5cf3-8296-f6b8a7404ed5", 00:17:59.332 "is_configured": true, 00:17:59.332 "data_offset": 256, 00:17:59.332 "data_size": 7936 00:17:59.332 }, 00:17:59.332 { 00:17:59.332 "name": "BaseBdev2", 00:17:59.332 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:59.332 "is_configured": true, 00:17:59.332 "data_offset": 256, 00:17:59.332 "data_size": 7936 00:17:59.332 } 00:17:59.332 ] 00:17:59.332 }' 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.332 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.332 [2024-10-15 09:17:17.186862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.332 [2024-10-15 09:17:17.221850] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:59.332 [2024-10-15 09:17:17.221922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.332 [2024-10-15 09:17:17.221942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.332 [2024-10-15 09:17:17.221952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.591 09:17:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.591 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.591 "name": "raid_bdev1", 00:17:59.591 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:17:59.591 "strip_size_kb": 0, 00:17:59.591 "state": "online", 00:17:59.591 "raid_level": "raid1", 00:17:59.591 "superblock": true, 00:17:59.591 "num_base_bdevs": 2, 00:17:59.591 "num_base_bdevs_discovered": 1, 00:17:59.591 "num_base_bdevs_operational": 1, 00:17:59.591 "base_bdevs_list": [ 00:17:59.591 { 00:17:59.591 "name": null, 00:17:59.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.591 "is_configured": false, 00:17:59.591 "data_offset": 0, 00:17:59.591 "data_size": 7936 00:17:59.591 }, 00:17:59.591 { 00:17:59.591 "name": "BaseBdev2", 00:17:59.591 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:17:59.591 "is_configured": true, 00:17:59.591 "data_offset": 256, 00:17:59.591 "data_size": 7936 00:17:59.592 } 00:17:59.592 ] 00:17:59.592 }' 00:17:59.592 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.592 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.851 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.111 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.111 "name": "raid_bdev1", 00:18:00.111 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:18:00.111 "strip_size_kb": 0, 00:18:00.111 "state": "online", 00:18:00.111 "raid_level": "raid1", 00:18:00.111 "superblock": true, 00:18:00.111 "num_base_bdevs": 2, 00:18:00.111 "num_base_bdevs_discovered": 1, 00:18:00.111 "num_base_bdevs_operational": 1, 00:18:00.111 "base_bdevs_list": [ 00:18:00.111 { 00:18:00.111 "name": null, 00:18:00.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.111 "is_configured": false, 00:18:00.111 "data_offset": 0, 00:18:00.111 "data_size": 7936 00:18:00.111 }, 00:18:00.111 { 00:18:00.111 "name": "BaseBdev2", 00:18:00.111 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:18:00.111 "is_configured": true, 00:18:00.111 "data_offset": 256, 00:18:00.112 "data_size": 7936 00:18:00.112 } 00:18:00.112 ] 00:18:00.112 }' 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.112 [2024-10-15 09:17:17.867971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:00.112 [2024-10-15 09:17:17.868033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.112 [2024-10-15 09:17:17.868057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:00.112 [2024-10-15 09:17:17.868074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.112 [2024-10-15 09:17:17.868544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.112 [2024-10-15 09:17:17.868560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:00.112 [2024-10-15 09:17:17.868646] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:00.112 [2024-10-15 09:17:17.868660] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:00.112 [2024-10-15 09:17:17.868670] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:00.112 [2024-10-15 09:17:17.868679] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:00.112 BaseBdev1 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.112 09:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.049 "name": "raid_bdev1", 00:18:01.049 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:18:01.049 "strip_size_kb": 0, 00:18:01.049 "state": "online", 00:18:01.049 "raid_level": "raid1", 00:18:01.049 "superblock": true, 00:18:01.049 "num_base_bdevs": 2, 00:18:01.049 "num_base_bdevs_discovered": 1, 00:18:01.049 "num_base_bdevs_operational": 1, 00:18:01.049 "base_bdevs_list": [ 00:18:01.049 { 00:18:01.049 "name": null, 00:18:01.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.049 "is_configured": false, 00:18:01.049 "data_offset": 0, 00:18:01.049 "data_size": 7936 00:18:01.049 }, 00:18:01.049 { 00:18:01.049 "name": "BaseBdev2", 00:18:01.049 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:18:01.049 "is_configured": true, 00:18:01.049 "data_offset": 256, 00:18:01.049 "data_size": 7936 00:18:01.049 } 00:18:01.049 ] 00:18:01.049 }' 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.049 09:17:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.617 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.618 "name": "raid_bdev1", 00:18:01.618 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:18:01.618 "strip_size_kb": 0, 00:18:01.618 "state": "online", 00:18:01.618 "raid_level": "raid1", 00:18:01.618 "superblock": true, 00:18:01.618 "num_base_bdevs": 2, 00:18:01.618 "num_base_bdevs_discovered": 1, 00:18:01.618 "num_base_bdevs_operational": 1, 00:18:01.618 "base_bdevs_list": [ 00:18:01.618 { 00:18:01.618 "name": null, 00:18:01.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.618 "is_configured": false, 00:18:01.618 "data_offset": 0, 00:18:01.618 "data_size": 7936 00:18:01.618 }, 00:18:01.618 { 00:18:01.618 "name": "BaseBdev2", 00:18:01.618 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:18:01.618 "is_configured": true, 00:18:01.618 "data_offset": 256, 00:18:01.618 "data_size": 7936 00:18:01.618 } 00:18:01.618 ] 00:18:01.618 }' 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.618 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.618 [2024-10-15 09:17:19.509472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.618 [2024-10-15 09:17:19.509747] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:01.618 [2024-10-15 09:17:19.509819] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:01.618 request: 00:18:01.618 { 00:18:01.878 "base_bdev": "BaseBdev1", 00:18:01.878 "raid_bdev": "raid_bdev1", 00:18:01.878 "method": "bdev_raid_add_base_bdev", 00:18:01.878 "req_id": 1 00:18:01.878 } 00:18:01.878 Got JSON-RPC error response 00:18:01.878 response: 00:18:01.878 { 00:18:01.878 "code": -22, 00:18:01.878 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:01.878 } 00:18:01.878 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:18:01.878 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:18:01.878 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.878 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.878 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.878 09:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.818 "name": "raid_bdev1", 00:18:02.818 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:18:02.818 "strip_size_kb": 0, 00:18:02.818 "state": "online", 00:18:02.818 "raid_level": "raid1", 00:18:02.818 "superblock": true, 00:18:02.818 "num_base_bdevs": 2, 00:18:02.818 "num_base_bdevs_discovered": 1, 00:18:02.818 "num_base_bdevs_operational": 1, 00:18:02.818 "base_bdevs_list": [ 00:18:02.818 { 00:18:02.818 "name": null, 00:18:02.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.818 "is_configured": false, 00:18:02.818 "data_offset": 0, 00:18:02.818 "data_size": 7936 00:18:02.818 }, 00:18:02.818 { 00:18:02.818 "name": "BaseBdev2", 00:18:02.818 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:18:02.818 "is_configured": true, 00:18:02.818 "data_offset": 256, 00:18:02.818 "data_size": 7936 00:18:02.818 } 00:18:02.818 ] 00:18:02.818 }' 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.818 09:17:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.388 09:17:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.388 "name": "raid_bdev1", 00:18:03.388 "uuid": "dfb0a3fe-d77c-4ba2-b1e1-74d65435e9da", 00:18:03.388 "strip_size_kb": 0, 00:18:03.388 "state": "online", 00:18:03.388 "raid_level": "raid1", 00:18:03.388 "superblock": true, 00:18:03.388 "num_base_bdevs": 2, 00:18:03.388 "num_base_bdevs_discovered": 1, 00:18:03.388 "num_base_bdevs_operational": 1, 00:18:03.388 "base_bdevs_list": [ 00:18:03.388 { 00:18:03.388 "name": null, 00:18:03.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.388 "is_configured": false, 00:18:03.388 "data_offset": 0, 00:18:03.388 "data_size": 7936 00:18:03.388 }, 00:18:03.388 { 00:18:03.388 "name": "BaseBdev2", 00:18:03.388 "uuid": "4fd6cd07-4b2e-5b94-bd7f-9c19e7e51954", 00:18:03.388 "is_configured": true, 00:18:03.388 "data_offset": 256, 00:18:03.388 "data_size": 7936 00:18:03.388 } 00:18:03.388 ] 00:18:03.388 }' 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.388 09:17:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86823 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86823 ']' 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86823 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.388 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86823 00:18:03.388 killing process with pid 86823 00:18:03.388 Received shutdown signal, test time was about 60.000000 seconds 00:18:03.389 00:18:03.389 Latency(us) 00:18:03.389 [2024-10-15T09:17:21.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.389 [2024-10-15T09:17:21.285Z] =================================================================================================================== 00:18:03.389 [2024-10-15T09:17:21.285Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.389 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:03.389 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:03.389 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86823' 00:18:03.389 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86823 00:18:03.389 [2024-10-15 09:17:21.177608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.389 [2024-10-15 09:17:21.177790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.389 09:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86823 00:18:03.389 [2024-10-15 
09:17:21.177850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.389 [2024-10-15 09:17:21.177865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:03.649 [2024-10-15 09:17:21.513398] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.034 09:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:05.034 00:18:05.034 real 0m20.515s 00:18:05.034 user 0m26.821s 00:18:05.034 sys 0m2.785s 00:18:05.034 09:17:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.034 09:17:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.034 ************************************ 00:18:05.034 END TEST raid_rebuild_test_sb_4k 00:18:05.034 ************************************ 00:18:05.034 09:17:22 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:05.034 09:17:22 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:05.034 09:17:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:05.034 09:17:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.034 09:17:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.034 ************************************ 00:18:05.034 START TEST raid_state_function_test_sb_md_separate 00:18:05.034 ************************************ 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:05.034 
09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:05.034 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:05.035 09:17:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87523 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:05.035 Process raid pid: 87523 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87523' 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87523 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87523 ']' 00:18:05.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.035 09:17:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.035 [2024-10-15 09:17:22.892237] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:18:05.035 [2024-10-15 09:17:22.892450] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.294 [2024-10-15 09:17:23.059018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.294 [2024-10-15 09:17:23.182845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.553 [2024-10-15 09:17:23.405191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.553 [2024-10-15 09:17:23.405343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.121 [2024-10-15 09:17:23.767675] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.121 [2024-10-15 09:17:23.767814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:06.121 [2024-10-15 09:17:23.767830] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.121 [2024-10-15 09:17:23.767842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.121 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.121 "name": "Existed_Raid", 00:18:06.121 "uuid": "7f9f5178-6a06-4ae1-a18f-36f1650f2374", 00:18:06.122 "strip_size_kb": 0, 00:18:06.122 "state": "configuring", 00:18:06.122 "raid_level": "raid1", 00:18:06.122 "superblock": true, 00:18:06.122 "num_base_bdevs": 2, 00:18:06.122 "num_base_bdevs_discovered": 0, 00:18:06.122 "num_base_bdevs_operational": 2, 00:18:06.122 "base_bdevs_list": [ 00:18:06.122 { 00:18:06.122 "name": "BaseBdev1", 00:18:06.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.122 "is_configured": false, 00:18:06.122 "data_offset": 0, 00:18:06.122 "data_size": 0 00:18:06.122 }, 00:18:06.122 { 00:18:06.122 "name": "BaseBdev2", 00:18:06.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.122 "is_configured": false, 00:18:06.122 "data_offset": 0, 00:18:06.122 "data_size": 0 00:18:06.122 } 00:18:06.122 ] 00:18:06.122 }' 00:18:06.122 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.122 09:17:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.381 
[2024-10-15 09:17:24.234820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.381 [2024-10-15 09:17:24.234937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.381 [2024-10-15 09:17:24.246845] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.381 [2024-10-15 09:17:24.246943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.381 [2024-10-15 09:17:24.246972] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.381 [2024-10-15 09:17:24.246999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.381 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.640 [2024-10-15 09:17:24.298207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.640 
BaseBdev1 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.640 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.640 [ 00:18:06.640 { 00:18:06.640 "name": "BaseBdev1", 00:18:06.640 "aliases": [ 00:18:06.640 "957c4b8d-9a8f-4120-85cf-1a0c097fab93" 00:18:06.640 ], 00:18:06.640 "product_name": "Malloc disk", 
00:18:06.640 "block_size": 4096, 00:18:06.640 "num_blocks": 8192, 00:18:06.640 "uuid": "957c4b8d-9a8f-4120-85cf-1a0c097fab93", 00:18:06.640 "md_size": 32, 00:18:06.640 "md_interleave": false, 00:18:06.640 "dif_type": 0, 00:18:06.640 "assigned_rate_limits": { 00:18:06.640 "rw_ios_per_sec": 0, 00:18:06.640 "rw_mbytes_per_sec": 0, 00:18:06.640 "r_mbytes_per_sec": 0, 00:18:06.640 "w_mbytes_per_sec": 0 00:18:06.640 }, 00:18:06.640 "claimed": true, 00:18:06.640 "claim_type": "exclusive_write", 00:18:06.640 "zoned": false, 00:18:06.640 "supported_io_types": { 00:18:06.640 "read": true, 00:18:06.640 "write": true, 00:18:06.640 "unmap": true, 00:18:06.640 "flush": true, 00:18:06.640 "reset": true, 00:18:06.640 "nvme_admin": false, 00:18:06.640 "nvme_io": false, 00:18:06.640 "nvme_io_md": false, 00:18:06.640 "write_zeroes": true, 00:18:06.640 "zcopy": true, 00:18:06.640 "get_zone_info": false, 00:18:06.640 "zone_management": false, 00:18:06.640 "zone_append": false, 00:18:06.640 "compare": false, 00:18:06.640 "compare_and_write": false, 00:18:06.640 "abort": true, 00:18:06.640 "seek_hole": false, 00:18:06.640 "seek_data": false, 00:18:06.640 "copy": true, 00:18:06.640 "nvme_iov_md": false 00:18:06.640 }, 00:18:06.640 "memory_domains": [ 00:18:06.640 { 00:18:06.640 "dma_device_id": "system", 00:18:06.640 "dma_device_type": 1 00:18:06.640 }, 00:18:06.640 { 00:18:06.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.641 "dma_device_type": 2 00:18:06.641 } 00:18:06.641 ], 00:18:06.641 "driver_specific": {} 00:18:06.641 } 00:18:06.641 ] 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:06.641 09:17:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.641 "name": "Existed_Raid", 00:18:06.641 "uuid": "e13b477a-4d09-4ba0-94b4-2462e3ea6e06", 
00:18:06.641 "strip_size_kb": 0, 00:18:06.641 "state": "configuring", 00:18:06.641 "raid_level": "raid1", 00:18:06.641 "superblock": true, 00:18:06.641 "num_base_bdevs": 2, 00:18:06.641 "num_base_bdevs_discovered": 1, 00:18:06.641 "num_base_bdevs_operational": 2, 00:18:06.641 "base_bdevs_list": [ 00:18:06.641 { 00:18:06.641 "name": "BaseBdev1", 00:18:06.641 "uuid": "957c4b8d-9a8f-4120-85cf-1a0c097fab93", 00:18:06.641 "is_configured": true, 00:18:06.641 "data_offset": 256, 00:18:06.641 "data_size": 7936 00:18:06.641 }, 00:18:06.641 { 00:18:06.641 "name": "BaseBdev2", 00:18:06.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.641 "is_configured": false, 00:18:06.641 "data_offset": 0, 00:18:06.641 "data_size": 0 00:18:06.641 } 00:18:06.641 ] 00:18:06.641 }' 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.641 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.901 [2024-10-15 09:17:24.773566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.901 [2024-10-15 09:17:24.773643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:06.901 09:17:24 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.901 [2024-10-15 09:17:24.785618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.901 [2024-10-15 09:17:24.787872] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:06.901 [2024-10-15 09:17:24.787924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.901 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.182 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.182 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.182 "name": "Existed_Raid", 00:18:07.183 "uuid": "2de0cb82-0b30-4f3a-ae58-18af396e54ed", 00:18:07.183 "strip_size_kb": 0, 00:18:07.183 "state": "configuring", 00:18:07.183 "raid_level": "raid1", 00:18:07.183 "superblock": true, 00:18:07.183 "num_base_bdevs": 2, 00:18:07.183 "num_base_bdevs_discovered": 1, 00:18:07.183 "num_base_bdevs_operational": 2, 00:18:07.183 "base_bdevs_list": [ 00:18:07.183 { 00:18:07.183 "name": "BaseBdev1", 00:18:07.183 "uuid": "957c4b8d-9a8f-4120-85cf-1a0c097fab93", 00:18:07.183 "is_configured": true, 00:18:07.183 "data_offset": 256, 00:18:07.183 "data_size": 7936 00:18:07.183 }, 00:18:07.183 { 00:18:07.183 "name": "BaseBdev2", 00:18:07.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.183 "is_configured": false, 00:18:07.183 "data_offset": 0, 00:18:07.183 "data_size": 0 00:18:07.183 } 00:18:07.183 ] 00:18:07.183 }' 00:18:07.183 09:17:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.183 09:17:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.442 [2024-10-15 09:17:25.195971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.442 [2024-10-15 09:17:25.196330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:07.442 [2024-10-15 09:17:25.196348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:07.442 [2024-10-15 09:17:25.196432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:07.442 [2024-10-15 09:17:25.196549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:07.442 [2024-10-15 09:17:25.196562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:07.442 [2024-10-15 09:17:25.196671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.442 BaseBdev2 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.442 [ 00:18:07.442 { 00:18:07.442 "name": "BaseBdev2", 00:18:07.442 "aliases": [ 00:18:07.442 "9f722874-27f8-4fe3-a945-e9dad7eba810" 00:18:07.442 ], 00:18:07.442 "product_name": "Malloc disk", 00:18:07.442 "block_size": 4096, 00:18:07.442 "num_blocks": 8192, 00:18:07.442 "uuid": "9f722874-27f8-4fe3-a945-e9dad7eba810", 00:18:07.442 "md_size": 32, 00:18:07.442 "md_interleave": false, 00:18:07.442 "dif_type": 0, 00:18:07.442 "assigned_rate_limits": { 00:18:07.442 "rw_ios_per_sec": 0, 00:18:07.442 "rw_mbytes_per_sec": 0, 00:18:07.442 "r_mbytes_per_sec": 0, 00:18:07.442 "w_mbytes_per_sec": 0 00:18:07.442 }, 00:18:07.442 "claimed": true, 00:18:07.442 "claim_type": 
"exclusive_write", 00:18:07.442 "zoned": false, 00:18:07.442 "supported_io_types": { 00:18:07.442 "read": true, 00:18:07.442 "write": true, 00:18:07.442 "unmap": true, 00:18:07.442 "flush": true, 00:18:07.442 "reset": true, 00:18:07.442 "nvme_admin": false, 00:18:07.442 "nvme_io": false, 00:18:07.442 "nvme_io_md": false, 00:18:07.442 "write_zeroes": true, 00:18:07.442 "zcopy": true, 00:18:07.442 "get_zone_info": false, 00:18:07.442 "zone_management": false, 00:18:07.442 "zone_append": false, 00:18:07.442 "compare": false, 00:18:07.442 "compare_and_write": false, 00:18:07.442 "abort": true, 00:18:07.442 "seek_hole": false, 00:18:07.442 "seek_data": false, 00:18:07.442 "copy": true, 00:18:07.442 "nvme_iov_md": false 00:18:07.442 }, 00:18:07.442 "memory_domains": [ 00:18:07.442 { 00:18:07.442 "dma_device_id": "system", 00:18:07.442 "dma_device_type": 1 00:18:07.442 }, 00:18:07.442 { 00:18:07.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.442 "dma_device_type": 2 00:18:07.442 } 00:18:07.442 ], 00:18:07.442 "driver_specific": {} 00:18:07.442 } 00:18:07.442 ] 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.442 
09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.442 "name": "Existed_Raid", 00:18:07.442 "uuid": "2de0cb82-0b30-4f3a-ae58-18af396e54ed", 00:18:07.442 "strip_size_kb": 0, 00:18:07.442 "state": "online", 00:18:07.442 "raid_level": "raid1", 00:18:07.442 "superblock": true, 00:18:07.442 "num_base_bdevs": 2, 00:18:07.442 "num_base_bdevs_discovered": 2, 00:18:07.442 "num_base_bdevs_operational": 2, 00:18:07.442 
"base_bdevs_list": [ 00:18:07.442 { 00:18:07.442 "name": "BaseBdev1", 00:18:07.442 "uuid": "957c4b8d-9a8f-4120-85cf-1a0c097fab93", 00:18:07.442 "is_configured": true, 00:18:07.442 "data_offset": 256, 00:18:07.442 "data_size": 7936 00:18:07.442 }, 00:18:07.442 { 00:18:07.442 "name": "BaseBdev2", 00:18:07.442 "uuid": "9f722874-27f8-4fe3-a945-e9dad7eba810", 00:18:07.442 "is_configured": true, 00:18:07.442 "data_offset": 256, 00:18:07.442 "data_size": 7936 00:18:07.442 } 00:18:07.442 ] 00:18:07.442 }' 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.442 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:08.012 [2024-10-15 09:17:25.731496] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:08.012 "name": "Existed_Raid", 00:18:08.012 "aliases": [ 00:18:08.012 "2de0cb82-0b30-4f3a-ae58-18af396e54ed" 00:18:08.012 ], 00:18:08.012 "product_name": "Raid Volume", 00:18:08.012 "block_size": 4096, 00:18:08.012 "num_blocks": 7936, 00:18:08.012 "uuid": "2de0cb82-0b30-4f3a-ae58-18af396e54ed", 00:18:08.012 "md_size": 32, 00:18:08.012 "md_interleave": false, 00:18:08.012 "dif_type": 0, 00:18:08.012 "assigned_rate_limits": { 00:18:08.012 "rw_ios_per_sec": 0, 00:18:08.012 "rw_mbytes_per_sec": 0, 00:18:08.012 "r_mbytes_per_sec": 0, 00:18:08.012 "w_mbytes_per_sec": 0 00:18:08.012 }, 00:18:08.012 "claimed": false, 00:18:08.012 "zoned": false, 00:18:08.012 "supported_io_types": { 00:18:08.012 "read": true, 00:18:08.012 "write": true, 00:18:08.012 "unmap": false, 00:18:08.012 "flush": false, 00:18:08.012 "reset": true, 00:18:08.012 "nvme_admin": false, 00:18:08.012 "nvme_io": false, 00:18:08.012 "nvme_io_md": false, 00:18:08.012 "write_zeroes": true, 00:18:08.012 "zcopy": false, 00:18:08.012 "get_zone_info": false, 00:18:08.012 "zone_management": false, 00:18:08.012 "zone_append": false, 00:18:08.012 "compare": false, 00:18:08.012 "compare_and_write": false, 00:18:08.012 "abort": false, 00:18:08.012 "seek_hole": false, 00:18:08.012 "seek_data": false, 00:18:08.012 "copy": false, 00:18:08.012 "nvme_iov_md": false 00:18:08.012 }, 00:18:08.012 "memory_domains": [ 00:18:08.012 { 00:18:08.012 "dma_device_id": "system", 00:18:08.012 "dma_device_type": 1 00:18:08.012 }, 00:18:08.012 { 00:18:08.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.012 "dma_device_type": 2 00:18:08.012 }, 00:18:08.012 { 
00:18:08.012 "dma_device_id": "system", 00:18:08.012 "dma_device_type": 1 00:18:08.012 }, 00:18:08.012 { 00:18:08.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.012 "dma_device_type": 2 00:18:08.012 } 00:18:08.012 ], 00:18:08.012 "driver_specific": { 00:18:08.012 "raid": { 00:18:08.012 "uuid": "2de0cb82-0b30-4f3a-ae58-18af396e54ed", 00:18:08.012 "strip_size_kb": 0, 00:18:08.012 "state": "online", 00:18:08.012 "raid_level": "raid1", 00:18:08.012 "superblock": true, 00:18:08.012 "num_base_bdevs": 2, 00:18:08.012 "num_base_bdevs_discovered": 2, 00:18:08.012 "num_base_bdevs_operational": 2, 00:18:08.012 "base_bdevs_list": [ 00:18:08.012 { 00:18:08.012 "name": "BaseBdev1", 00:18:08.012 "uuid": "957c4b8d-9a8f-4120-85cf-1a0c097fab93", 00:18:08.012 "is_configured": true, 00:18:08.012 "data_offset": 256, 00:18:08.012 "data_size": 7936 00:18:08.012 }, 00:18:08.012 { 00:18:08.012 "name": "BaseBdev2", 00:18:08.012 "uuid": "9f722874-27f8-4fe3-a945-e9dad7eba810", 00:18:08.012 "is_configured": true, 00:18:08.012 "data_offset": 256, 00:18:08.012 "data_size": 7936 00:18:08.012 } 00:18:08.012 ] 00:18:08.012 } 00:18:08.012 } 00:18:08.012 }' 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:08.012 BaseBdev2' 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.012 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.272 09:17:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.272 [2024-10-15 09:17:25.982832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.272 "name": "Existed_Raid", 00:18:08.272 "uuid": "2de0cb82-0b30-4f3a-ae58-18af396e54ed", 00:18:08.272 "strip_size_kb": 0, 00:18:08.272 "state": "online", 00:18:08.272 "raid_level": "raid1", 00:18:08.272 "superblock": true, 00:18:08.272 "num_base_bdevs": 2, 00:18:08.272 "num_base_bdevs_discovered": 1, 00:18:08.272 "num_base_bdevs_operational": 1, 00:18:08.272 "base_bdevs_list": [ 00:18:08.272 { 00:18:08.272 "name": null, 00:18:08.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.272 "is_configured": false, 00:18:08.272 "data_offset": 0, 00:18:08.272 "data_size": 7936 00:18:08.272 }, 00:18:08.272 { 00:18:08.272 "name": "BaseBdev2", 00:18:08.272 "uuid": 
"9f722874-27f8-4fe3-a945-e9dad7eba810", 00:18:08.272 "is_configured": true, 00:18:08.272 "data_offset": 256, 00:18:08.272 "data_size": 7936 00:18:08.272 } 00:18:08.272 ] 00:18:08.272 }' 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.272 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.839 [2024-10-15 09:17:26.567045] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.839 [2024-10-15 09:17:26.567213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.839 [2024-10-15 09:17:26.680194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.839 [2024-10-15 09:17:26.680252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.839 [2024-10-15 09:17:26.680265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:08.839 09:17:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87523 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87523 ']' 00:18:08.839 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87523 00:18:09.098 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:09.098 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.098 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87523 00:18:09.098 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.098 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.098 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87523' 00:18:09.098 killing process with pid 87523 00:18:09.098 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87523 00:18:09.098 [2024-10-15 09:17:26.767865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.098 09:17:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87523 00:18:09.098 [2024-10-15 09:17:26.789054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:10.477 09:17:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:10.477 00:18:10.477 real 0m5.217s 00:18:10.477 user 0m7.464s 00:18:10.477 sys 0m0.876s 00:18:10.477 ************************************ 00:18:10.477 END TEST raid_state_function_test_sb_md_separate 00:18:10.477 
************************************ 00:18:10.477 09:17:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.477 09:17:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.477 09:17:28 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:10.477 09:17:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:10.477 09:17:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.477 09:17:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:10.477 ************************************ 00:18:10.477 START TEST raid_superblock_test_md_separate 00:18:10.477 ************************************ 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87771 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87771 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87771 ']' 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.477 09:17:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.477 [2024-10-15 09:17:28.175648] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:18:10.477 [2024-10-15 09:17:28.175893] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87771 ] 00:18:10.477 [2024-10-15 09:17:28.327553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.737 [2024-10-15 09:17:28.454081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.997 [2024-10-15 09:17:28.661698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.997 [2024-10-15 09:17:28.661852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:11.257 09:17:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.257 malloc1 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.257 [2024-10-15 09:17:29.116412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:11.257 [2024-10-15 09:17:29.116480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.257 [2024-10-15 09:17:29.116504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:11.257 [2024-10-15 09:17:29.116515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.257 [2024-10-15 09:17:29.118732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.257 [2024-10-15 09:17:29.118819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:11.257 pt1 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.257 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.517 malloc2 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.517 09:17:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.517 [2024-10-15 09:17:29.177054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.517 [2024-10-15 09:17:29.177153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.517 [2024-10-15 09:17:29.177194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:11.517 [2024-10-15 09:17:29.177222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.517 [2024-10-15 09:17:29.179314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.517 [2024-10-15 09:17:29.179383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.517 pt2 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.517 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.517 [2024-10-15 09:17:29.189083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.517 [2024-10-15 09:17:29.190975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.517 [2024-10-15 09:17:29.191210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:11.518 [2024-10-15 09:17:29.191255] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:11.518 [2024-10-15 09:17:29.191354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:11.518 [2024-10-15 09:17:29.191528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:11.518 [2024-10-15 09:17:29.191570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:11.518 [2024-10-15 09:17:29.191733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.518 09:17:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.518 "name": "raid_bdev1", 00:18:11.518 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:11.518 "strip_size_kb": 0, 00:18:11.518 "state": "online", 00:18:11.518 "raid_level": "raid1", 00:18:11.518 "superblock": true, 00:18:11.518 "num_base_bdevs": 2, 00:18:11.518 "num_base_bdevs_discovered": 2, 00:18:11.518 "num_base_bdevs_operational": 2, 00:18:11.518 "base_bdevs_list": [ 00:18:11.518 { 00:18:11.518 "name": "pt1", 00:18:11.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.518 "is_configured": true, 00:18:11.518 "data_offset": 256, 00:18:11.518 "data_size": 7936 00:18:11.518 }, 00:18:11.518 { 00:18:11.518 "name": "pt2", 00:18:11.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.518 "is_configured": true, 00:18:11.518 "data_offset": 256, 00:18:11.518 "data_size": 7936 00:18:11.518 } 00:18:11.518 ] 00:18:11.518 }' 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.518 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.778 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.778 [2024-10-15 09:17:29.660634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.038 "name": "raid_bdev1", 00:18:12.038 "aliases": [ 00:18:12.038 "23d6087c-8614-476b-abbe-5da18b92a559" 00:18:12.038 ], 00:18:12.038 "product_name": "Raid Volume", 00:18:12.038 "block_size": 4096, 00:18:12.038 "num_blocks": 7936, 00:18:12.038 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:12.038 "md_size": 32, 00:18:12.038 "md_interleave": false, 00:18:12.038 "dif_type": 0, 00:18:12.038 "assigned_rate_limits": { 00:18:12.038 "rw_ios_per_sec": 0, 00:18:12.038 "rw_mbytes_per_sec": 0, 00:18:12.038 "r_mbytes_per_sec": 0, 00:18:12.038 "w_mbytes_per_sec": 0 00:18:12.038 }, 00:18:12.038 "claimed": false, 00:18:12.038 "zoned": false, 
00:18:12.038 "supported_io_types": { 00:18:12.038 "read": true, 00:18:12.038 "write": true, 00:18:12.038 "unmap": false, 00:18:12.038 "flush": false, 00:18:12.038 "reset": true, 00:18:12.038 "nvme_admin": false, 00:18:12.038 "nvme_io": false, 00:18:12.038 "nvme_io_md": false, 00:18:12.038 "write_zeroes": true, 00:18:12.038 "zcopy": false, 00:18:12.038 "get_zone_info": false, 00:18:12.038 "zone_management": false, 00:18:12.038 "zone_append": false, 00:18:12.038 "compare": false, 00:18:12.038 "compare_and_write": false, 00:18:12.038 "abort": false, 00:18:12.038 "seek_hole": false, 00:18:12.038 "seek_data": false, 00:18:12.038 "copy": false, 00:18:12.038 "nvme_iov_md": false 00:18:12.038 }, 00:18:12.038 "memory_domains": [ 00:18:12.038 { 00:18:12.038 "dma_device_id": "system", 00:18:12.038 "dma_device_type": 1 00:18:12.038 }, 00:18:12.038 { 00:18:12.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.038 "dma_device_type": 2 00:18:12.038 }, 00:18:12.038 { 00:18:12.038 "dma_device_id": "system", 00:18:12.038 "dma_device_type": 1 00:18:12.038 }, 00:18:12.038 { 00:18:12.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.038 "dma_device_type": 2 00:18:12.038 } 00:18:12.038 ], 00:18:12.038 "driver_specific": { 00:18:12.038 "raid": { 00:18:12.038 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:12.038 "strip_size_kb": 0, 00:18:12.038 "state": "online", 00:18:12.038 "raid_level": "raid1", 00:18:12.038 "superblock": true, 00:18:12.038 "num_base_bdevs": 2, 00:18:12.038 "num_base_bdevs_discovered": 2, 00:18:12.038 "num_base_bdevs_operational": 2, 00:18:12.038 "base_bdevs_list": [ 00:18:12.038 { 00:18:12.038 "name": "pt1", 00:18:12.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.038 "is_configured": true, 00:18:12.038 "data_offset": 256, 00:18:12.038 "data_size": 7936 00:18:12.038 }, 00:18:12.038 { 00:18:12.038 "name": "pt2", 00:18:12.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.038 "is_configured": true, 00:18:12.038 "data_offset": 256, 
00:18:12.038 "data_size": 7936 00:18:12.038 } 00:18:12.038 ] 00:18:12.038 } 00:18:12.038 } 00:18:12.038 }' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:12.038 pt2' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.038 [2024-10-15 09:17:29.900221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.038 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=23d6087c-8614-476b-abbe-5da18b92a559 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 23d6087c-8614-476b-abbe-5da18b92a559 ']' 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.297 [2024-10-15 09:17:29.943808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.297 [2024-10-15 09:17:29.943844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.297 [2024-10-15 09:17:29.943947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.297 [2024-10-15 09:17:29.944018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.297 [2024-10-15 09:17:29.944035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.297 09:17:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:12.297 09:17:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.297 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.297 [2024-10-15 09:17:30.079639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:12.297 [2024-10-15 09:17:30.081747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:12.297 [2024-10-15 09:17:30.081838] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:12.297 [2024-10-15 09:17:30.081901] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:12.297 [2024-10-15 09:17:30.081918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.297 [2024-10-15 09:17:30.081930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:12.297 request: 00:18:12.297 { 00:18:12.297 "name": 
"raid_bdev1", 00:18:12.297 "raid_level": "raid1", 00:18:12.297 "base_bdevs": [ 00:18:12.297 "malloc1", 00:18:12.297 "malloc2" 00:18:12.297 ], 00:18:12.297 "superblock": false, 00:18:12.297 "method": "bdev_raid_create", 00:18:12.297 "req_id": 1 00:18:12.297 } 00:18:12.297 Got JSON-RPC error response 00:18:12.297 response: 00:18:12.297 { 00:18:12.297 "code": -17, 00:18:12.297 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:12.297 } 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.298 [2024-10-15 09:17:30.139465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:12.298 [2024-10-15 09:17:30.139538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.298 [2024-10-15 09:17:30.139557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:12.298 [2024-10-15 09:17:30.139568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.298 [2024-10-15 09:17:30.141642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.298 [2024-10-15 09:17:30.141700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:12.298 [2024-10-15 09:17:30.141766] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:12.298 [2024-10-15 09:17:30.141832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:12.298 pt1 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.298 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.555 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.555 "name": "raid_bdev1", 00:18:12.555 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:12.555 "strip_size_kb": 0, 00:18:12.556 "state": "configuring", 00:18:12.556 "raid_level": "raid1", 00:18:12.556 "superblock": true, 00:18:12.556 "num_base_bdevs": 2, 00:18:12.556 "num_base_bdevs_discovered": 1, 00:18:12.556 "num_base_bdevs_operational": 2, 00:18:12.556 "base_bdevs_list": [ 00:18:12.556 { 00:18:12.556 "name": "pt1", 00:18:12.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.556 "is_configured": true, 00:18:12.556 "data_offset": 256, 00:18:12.556 "data_size": 7936 00:18:12.556 }, 00:18:12.556 { 00:18:12.556 "name": null, 00:18:12.556 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.556 "is_configured": false, 00:18:12.556 "data_offset": 256, 00:18:12.556 "data_size": 7936 00:18:12.556 } 00:18:12.556 ] 00:18:12.556 }' 00:18:12.556 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.556 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.815 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:12.815 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:12.815 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:12.815 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.815 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.815 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.815 [2024-10-15 09:17:30.602766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.815 [2024-10-15 09:17:30.602847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.815 [2024-10-15 09:17:30.602873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:12.815 [2024-10-15 09:17:30.602886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.815 [2024-10-15 09:17:30.603165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.815 [2024-10-15 09:17:30.603191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.815 [2024-10-15 09:17:30.603256] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:12.816 [2024-10-15 09:17:30.603284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.816 [2024-10-15 09:17:30.603422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:12.816 [2024-10-15 09:17:30.603439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:12.816 [2024-10-15 09:17:30.603520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:12.816 [2024-10-15 09:17:30.603662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:12.816 [2024-10-15 09:17:30.603674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:12.816 [2024-10-15 09:17:30.603827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.816 pt2 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.816 "name": "raid_bdev1", 00:18:12.816 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:12.816 "strip_size_kb": 0, 00:18:12.816 "state": "online", 00:18:12.816 "raid_level": "raid1", 00:18:12.816 "superblock": true, 00:18:12.816 "num_base_bdevs": 2, 00:18:12.816 "num_base_bdevs_discovered": 2, 00:18:12.816 "num_base_bdevs_operational": 2, 00:18:12.816 "base_bdevs_list": [ 00:18:12.816 { 00:18:12.816 "name": "pt1", 00:18:12.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.816 "is_configured": true, 00:18:12.816 "data_offset": 256, 00:18:12.816 "data_size": 7936 00:18:12.816 }, 00:18:12.816 { 00:18:12.816 "name": "pt2", 00:18:12.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.816 "is_configured": true, 00:18:12.816 "data_offset": 256, 
00:18:12.816 "data_size": 7936 00:18:12.816 } 00:18:12.816 ] 00:18:12.816 }' 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.816 09:17:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.401 [2024-10-15 09:17:31.070300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.401 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:13.401 "name": "raid_bdev1", 00:18:13.401 "aliases": [ 00:18:13.401 "23d6087c-8614-476b-abbe-5da18b92a559" 00:18:13.401 ], 00:18:13.401 "product_name": 
"Raid Volume", 00:18:13.401 "block_size": 4096, 00:18:13.401 "num_blocks": 7936, 00:18:13.401 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:13.401 "md_size": 32, 00:18:13.401 "md_interleave": false, 00:18:13.401 "dif_type": 0, 00:18:13.401 "assigned_rate_limits": { 00:18:13.401 "rw_ios_per_sec": 0, 00:18:13.401 "rw_mbytes_per_sec": 0, 00:18:13.401 "r_mbytes_per_sec": 0, 00:18:13.401 "w_mbytes_per_sec": 0 00:18:13.401 }, 00:18:13.401 "claimed": false, 00:18:13.401 "zoned": false, 00:18:13.401 "supported_io_types": { 00:18:13.401 "read": true, 00:18:13.401 "write": true, 00:18:13.401 "unmap": false, 00:18:13.401 "flush": false, 00:18:13.401 "reset": true, 00:18:13.401 "nvme_admin": false, 00:18:13.401 "nvme_io": false, 00:18:13.401 "nvme_io_md": false, 00:18:13.401 "write_zeroes": true, 00:18:13.401 "zcopy": false, 00:18:13.401 "get_zone_info": false, 00:18:13.401 "zone_management": false, 00:18:13.401 "zone_append": false, 00:18:13.401 "compare": false, 00:18:13.401 "compare_and_write": false, 00:18:13.401 "abort": false, 00:18:13.401 "seek_hole": false, 00:18:13.401 "seek_data": false, 00:18:13.401 "copy": false, 00:18:13.401 "nvme_iov_md": false 00:18:13.401 }, 00:18:13.401 "memory_domains": [ 00:18:13.401 { 00:18:13.401 "dma_device_id": "system", 00:18:13.401 "dma_device_type": 1 00:18:13.401 }, 00:18:13.401 { 00:18:13.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.401 "dma_device_type": 2 00:18:13.401 }, 00:18:13.401 { 00:18:13.401 "dma_device_id": "system", 00:18:13.401 "dma_device_type": 1 00:18:13.401 }, 00:18:13.401 { 00:18:13.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.401 "dma_device_type": 2 00:18:13.401 } 00:18:13.401 ], 00:18:13.401 "driver_specific": { 00:18:13.401 "raid": { 00:18:13.401 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:13.401 "strip_size_kb": 0, 00:18:13.401 "state": "online", 00:18:13.401 "raid_level": "raid1", 00:18:13.401 "superblock": true, 00:18:13.401 "num_base_bdevs": 2, 00:18:13.401 
"num_base_bdevs_discovered": 2, 00:18:13.401 "num_base_bdevs_operational": 2, 00:18:13.401 "base_bdevs_list": [ 00:18:13.401 { 00:18:13.401 "name": "pt1", 00:18:13.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.401 "is_configured": true, 00:18:13.401 "data_offset": 256, 00:18:13.401 "data_size": 7936 00:18:13.401 }, 00:18:13.401 { 00:18:13.401 "name": "pt2", 00:18:13.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.402 "is_configured": true, 00:18:13.402 "data_offset": 256, 00:18:13.402 "data_size": 7936 00:18:13.402 } 00:18:13.402 ] 00:18:13.402 } 00:18:13.402 } 00:18:13.402 }' 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:13.402 pt2' 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.402 
09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.402 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.662 [2024-10-15 09:17:31.333875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 23d6087c-8614-476b-abbe-5da18b92a559 '!=' 23d6087c-8614-476b-abbe-5da18b92a559 ']' 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.662 [2024-10-15 09:17:31.369579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.662 09:17:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.662 "name": "raid_bdev1", 00:18:13.662 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:13.662 "strip_size_kb": 0, 00:18:13.662 "state": "online", 00:18:13.662 "raid_level": "raid1", 00:18:13.662 "superblock": true, 00:18:13.662 "num_base_bdevs": 2, 00:18:13.662 "num_base_bdevs_discovered": 1, 00:18:13.662 "num_base_bdevs_operational": 1, 00:18:13.662 "base_bdevs_list": [ 00:18:13.662 { 00:18:13.662 "name": null, 00:18:13.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.662 "is_configured": false, 00:18:13.662 "data_offset": 0, 00:18:13.662 "data_size": 7936 00:18:13.662 }, 00:18:13.662 { 00:18:13.662 "name": "pt2", 00:18:13.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.662 "is_configured": true, 00:18:13.662 "data_offset": 256, 00:18:13.662 "data_size": 7936 00:18:13.662 } 00:18:13.662 ] 00:18:13.662 }' 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:13.662 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 [2024-10-15 09:17:31.844731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.231 [2024-10-15 09:17:31.844763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.231 [2024-10-15 09:17:31.844847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.231 [2024-10-15 09:17:31.844913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.231 [2024-10-15 09:17:31.844930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:14.231 09:17:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.231 [2024-10-15 09:17:31.924564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.231 [2024-10-15 09:17:31.924626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.231 
[2024-10-15 09:17:31.924642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:14.231 [2024-10-15 09:17:31.924653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.231 [2024-10-15 09:17:31.926874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.231 [2024-10-15 09:17:31.926918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.231 [2024-10-15 09:17:31.926976] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:14.231 [2024-10-15 09:17:31.927028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.231 [2024-10-15 09:17:31.927133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:14.231 [2024-10-15 09:17:31.927153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:14.231 [2024-10-15 09:17:31.927236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:14.231 [2024-10-15 09:17:31.927368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:14.231 [2024-10-15 09:17:31.927380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:14.231 [2024-10-15 09:17:31.927488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.231 pt2 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.231 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.232 "name": "raid_bdev1", 00:18:14.232 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:14.232 "strip_size_kb": 0, 00:18:14.232 "state": "online", 00:18:14.232 "raid_level": "raid1", 00:18:14.232 "superblock": true, 00:18:14.232 "num_base_bdevs": 2, 00:18:14.232 "num_base_bdevs_discovered": 1, 00:18:14.232 "num_base_bdevs_operational": 1, 00:18:14.232 "base_bdevs_list": [ 00:18:14.232 { 00:18:14.232 
"name": null, 00:18:14.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.232 "is_configured": false, 00:18:14.232 "data_offset": 256, 00:18:14.232 "data_size": 7936 00:18:14.232 }, 00:18:14.232 { 00:18:14.232 "name": "pt2", 00:18:14.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.232 "is_configured": true, 00:18:14.232 "data_offset": 256, 00:18:14.232 "data_size": 7936 00:18:14.232 } 00:18:14.232 ] 00:18:14.232 }' 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.232 09:17:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.490 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.490 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.490 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.490 [2024-10-15 09:17:32.375827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.490 [2024-10-15 09:17:32.375865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.490 [2024-10-15 09:17:32.375951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.490 [2024-10-15 09:17:32.376036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.490 [2024-10-15 09:17:32.376061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:14.490 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.490 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.490 09:17:32 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.490 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:14.490 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.749 [2024-10-15 09:17:32.435791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:14.749 [2024-10-15 09:17:32.435883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.749 [2024-10-15 09:17:32.435907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:14.749 [2024-10-15 09:17:32.435918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.749 [2024-10-15 09:17:32.438258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.749 [2024-10-15 09:17:32.438300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:14.749 [2024-10-15 09:17:32.438370] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:14.749 [2024-10-15 09:17:32.438422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.749 [2024-10-15 09:17:32.438598] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:14.749 [2024-10-15 09:17:32.438619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.749 [2024-10-15 09:17:32.438641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:14.749 [2024-10-15 09:17:32.438721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.749 [2024-10-15 09:17:32.438800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:14.749 [2024-10-15 09:17:32.438813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:14.749 [2024-10-15 09:17:32.438899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:14.749 [2024-10-15 09:17:32.439021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:14.749 [2024-10-15 09:17:32.439035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:14.749 [2024-10-15 09:17:32.439155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.749 pt1 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.749 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.750 "name": "raid_bdev1", 00:18:14.750 "uuid": "23d6087c-8614-476b-abbe-5da18b92a559", 00:18:14.750 "strip_size_kb": 0, 00:18:14.750 "state": "online", 00:18:14.750 "raid_level": "raid1", 00:18:14.750 "superblock": true, 00:18:14.750 "num_base_bdevs": 2, 00:18:14.750 "num_base_bdevs_discovered": 1, 00:18:14.750 
"num_base_bdevs_operational": 1, 00:18:14.750 "base_bdevs_list": [ 00:18:14.750 { 00:18:14.750 "name": null, 00:18:14.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.750 "is_configured": false, 00:18:14.750 "data_offset": 256, 00:18:14.750 "data_size": 7936 00:18:14.750 }, 00:18:14.750 { 00:18:14.750 "name": "pt2", 00:18:14.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.750 "is_configured": true, 00:18:14.750 "data_offset": 256, 00:18:14.750 "data_size": 7936 00:18:14.750 } 00:18:14.750 ] 00:18:14.750 }' 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.750 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.008 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:15.008 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.008 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:15.008 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.008 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.267 [2024-10-15 
09:17:32.943188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 23d6087c-8614-476b-abbe-5da18b92a559 '!=' 23d6087c-8614-476b-abbe-5da18b92a559 ']' 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87771 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87771 ']' 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87771 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87771 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:15.267 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:15.267 killing process with pid 87771 00:18:15.268 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87771' 00:18:15.268 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 87771 00:18:15.268 09:17:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87771 00:18:15.268 [2024-10-15 09:17:32.991481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.268 [2024-10-15 09:17:32.991594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:18:15.268 [2024-10-15 09:17:32.991655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.268 [2024-10-15 09:17:32.991671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:15.526 [2024-10-15 09:17:33.248792] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.928 09:17:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:16.928 00:18:16.928 real 0m6.399s 00:18:16.928 user 0m9.677s 00:18:16.928 sys 0m1.135s 00:18:16.928 09:17:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.928 09:17:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.928 ************************************ 00:18:16.928 END TEST raid_superblock_test_md_separate 00:18:16.928 ************************************ 00:18:16.928 09:17:34 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:16.928 09:17:34 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:16.928 09:17:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:16.928 09:17:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.928 09:17:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.928 ************************************ 00:18:16.928 START TEST raid_rebuild_test_sb_md_separate 00:18:16.928 ************************************ 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:16.928 
09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88099 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88099 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88099 ']' 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:16.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:16.928 09:17:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.928 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:16.928 Zero copy mechanism will not be used. 00:18:16.928 [2024-10-15 09:17:34.648349] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:18:16.928 [2024-10-15 09:17:34.648471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88099 ] 00:18:16.928 [2024-10-15 09:17:34.796928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.187 [2024-10-15 09:17:34.917026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.447 [2024-10-15 09:17:35.130280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.447 [2024-10-15 09:17:35.130321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.706 BaseBdev1_malloc 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:17.706 09:17:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.706 [2024-10-15 09:17:35.563138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:17.706 [2024-10-15 09:17:35.563207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.706 [2024-10-15 09:17:35.563236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:17.706 [2024-10-15 09:17:35.563249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.706 [2024-10-15 09:17:35.565384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.706 [2024-10-15 09:17:35.565422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:17.706 BaseBdev1 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.706 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.964 BaseBdev2_malloc 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.964 [2024-10-15 09:17:35.622613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:17.964 [2024-10-15 09:17:35.622703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.964 [2024-10-15 09:17:35.622732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:17.964 [2024-10-15 09:17:35.622745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.964 [2024-10-15 09:17:35.625006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.964 [2024-10-15 09:17:35.625053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:17.964 BaseBdev2 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.964 spare_malloc 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.964 spare_delay 00:18:17.964 09:17:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.964 [2024-10-15 09:17:35.707887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.964 [2024-10-15 09:17:35.707976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.964 [2024-10-15 09:17:35.708008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:17.964 [2024-10-15 09:17:35.708019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.964 [2024-10-15 09:17:35.710007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.964 [2024-10-15 09:17:35.710048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.964 spare 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.964 [2024-10-15 09:17:35.719904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.964 [2024-10-15 09:17:35.721967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:17.964 [2024-10-15 09:17:35.722181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:17.964 [2024-10-15 09:17:35.722199] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:17.964 [2024-10-15 09:17:35.722294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:17.964 [2024-10-15 09:17:35.722443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:17.964 [2024-10-15 09:17:35.722459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:17.964 [2024-10-15 09:17:35.722586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.964 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.964 "name": "raid_bdev1", 00:18:17.964 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:17.964 "strip_size_kb": 0, 00:18:17.964 "state": "online", 00:18:17.964 "raid_level": "raid1", 00:18:17.964 "superblock": true, 00:18:17.964 "num_base_bdevs": 2, 00:18:17.964 "num_base_bdevs_discovered": 2, 00:18:17.964 "num_base_bdevs_operational": 2, 00:18:17.964 "base_bdevs_list": [ 00:18:17.964 { 00:18:17.964 "name": "BaseBdev1", 00:18:17.964 "uuid": "e107141e-5b8d-5c83-b9b3-2b23e79511e3", 00:18:17.964 "is_configured": true, 00:18:17.964 "data_offset": 256, 00:18:17.964 "data_size": 7936 00:18:17.964 }, 00:18:17.964 { 00:18:17.964 "name": "BaseBdev2", 00:18:17.964 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:17.965 "is_configured": true, 00:18:17.965 "data_offset": 256, 00:18:17.965 "data_size": 7936 00:18:17.965 } 00:18:17.965 ] 00:18:17.965 }' 00:18:17.965 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.965 09:17:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.529 09:17:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.529 [2024-10-15 09:17:36.199400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.529 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:18.787 [2024-10-15 09:17:36.510747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:18.787 /dev/nbd0 00:18:18.787 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.787 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.787 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:18.787 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:18.787 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:18.787 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:18.787 
09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.788 1+0 records in 00:18:18.788 1+0 records out 00:18:18.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422329 s, 9.7 MB/s 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:18.788 09:17:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:19.393 7936+0 records in 00:18:19.393 7936+0 records out 00:18:19.393 32505856 bytes (33 MB, 31 MiB) copied, 0.660569 s, 49.2 MB/s 00:18:19.393 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:19.393 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.393 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:19.393 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.393 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:19.393 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.393 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.651 [2024-10-15 09:17:37.492942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.651 [2024-10-15 09:17:37.509041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.651 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.652 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.909 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.909 "name": "raid_bdev1", 00:18:19.909 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:19.909 "strip_size_kb": 0, 00:18:19.909 "state": "online", 00:18:19.909 "raid_level": "raid1", 00:18:19.909 "superblock": true, 00:18:19.909 "num_base_bdevs": 2, 00:18:19.909 "num_base_bdevs_discovered": 1, 00:18:19.909 "num_base_bdevs_operational": 1, 00:18:19.909 "base_bdevs_list": [ 00:18:19.909 { 00:18:19.909 "name": null, 00:18:19.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.909 "is_configured": false, 00:18:19.909 "data_offset": 0, 00:18:19.909 "data_size": 7936 00:18:19.909 }, 00:18:19.909 { 00:18:19.909 "name": "BaseBdev2", 00:18:19.909 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:19.909 "is_configured": true, 00:18:19.909 "data_offset": 256, 00:18:19.909 "data_size": 7936 00:18:19.909 } 00:18:19.909 ] 00:18:19.909 }' 00:18:19.909 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.909 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.167 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.167 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:20.167 09:17:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.167 [2024-10-15 09:17:37.992245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.167 [2024-10-15 09:17:38.007367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:20.167 09:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.167 09:17:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:20.167 [2024-10-15 09:17:38.009265] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.539 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.539 09:17:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.539 "name": "raid_bdev1", 00:18:21.539 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:21.539 "strip_size_kb": 0, 00:18:21.539 "state": "online", 00:18:21.539 "raid_level": "raid1", 00:18:21.539 "superblock": true, 00:18:21.539 "num_base_bdevs": 2, 00:18:21.539 "num_base_bdevs_discovered": 2, 00:18:21.539 "num_base_bdevs_operational": 2, 00:18:21.539 "process": { 00:18:21.539 "type": "rebuild", 00:18:21.539 "target": "spare", 00:18:21.539 "progress": { 00:18:21.540 "blocks": 2560, 00:18:21.540 "percent": 32 00:18:21.540 } 00:18:21.540 }, 00:18:21.540 "base_bdevs_list": [ 00:18:21.540 { 00:18:21.540 "name": "spare", 00:18:21.540 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:21.540 "is_configured": true, 00:18:21.540 "data_offset": 256, 00:18:21.540 "data_size": 7936 00:18:21.540 }, 00:18:21.540 { 00:18:21.540 "name": "BaseBdev2", 00:18:21.540 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:21.540 "is_configured": true, 00:18:21.540 "data_offset": 256, 00:18:21.540 "data_size": 7936 00:18:21.540 } 00:18:21.540 ] 00:18:21.540 }' 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.540 [2024-10-15 09:17:39.173722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.540 [2024-10-15 09:17:39.215439] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.540 [2024-10-15 09:17:39.215525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.540 [2024-10-15 09:17:39.215542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.540 [2024-10-15 09:17:39.215556] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.540 "name": "raid_bdev1", 00:18:21.540 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:21.540 "strip_size_kb": 0, 00:18:21.540 "state": "online", 00:18:21.540 "raid_level": "raid1", 00:18:21.540 "superblock": true, 00:18:21.540 "num_base_bdevs": 2, 00:18:21.540 "num_base_bdevs_discovered": 1, 00:18:21.540 "num_base_bdevs_operational": 1, 00:18:21.540 "base_bdevs_list": [ 00:18:21.540 { 00:18:21.540 "name": null, 00:18:21.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.540 "is_configured": false, 00:18:21.540 "data_offset": 0, 00:18:21.540 "data_size": 7936 00:18:21.540 }, 00:18:21.540 { 00:18:21.540 "name": "BaseBdev2", 00:18:21.540 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:21.540 "is_configured": true, 00:18:21.540 "data_offset": 256, 00:18:21.540 "data_size": 7936 00:18:21.540 } 00:18:21.540 ] 00:18:21.540 }' 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.540 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.799 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.799 09:17:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.799 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.799 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.799 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.799 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.799 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.799 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.799 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.083 "name": "raid_bdev1", 00:18:22.083 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:22.083 "strip_size_kb": 0, 00:18:22.083 "state": "online", 00:18:22.083 "raid_level": "raid1", 00:18:22.083 "superblock": true, 00:18:22.083 "num_base_bdevs": 2, 00:18:22.083 "num_base_bdevs_discovered": 1, 00:18:22.083 "num_base_bdevs_operational": 1, 00:18:22.083 "base_bdevs_list": [ 00:18:22.083 { 00:18:22.083 "name": null, 00:18:22.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.083 "is_configured": false, 00:18:22.083 "data_offset": 0, 00:18:22.083 "data_size": 7936 00:18:22.083 }, 00:18:22.083 { 00:18:22.083 "name": "BaseBdev2", 00:18:22.083 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:22.083 "is_configured": true, 00:18:22.083 "data_offset": 256, 00:18:22.083 "data_size": 7936 
00:18:22.083 } 00:18:22.083 ] 00:18:22.083 }' 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.083 [2024-10-15 09:17:39.845486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.083 [2024-10-15 09:17:39.861316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.083 09:17:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:22.083 [2024-10-15 09:17:39.863375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.019 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.278 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.278 "name": "raid_bdev1", 00:18:23.278 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:23.278 "strip_size_kb": 0, 00:18:23.278 "state": "online", 00:18:23.278 "raid_level": "raid1", 00:18:23.278 "superblock": true, 00:18:23.278 "num_base_bdevs": 2, 00:18:23.278 "num_base_bdevs_discovered": 2, 00:18:23.278 "num_base_bdevs_operational": 2, 00:18:23.278 "process": { 00:18:23.278 "type": "rebuild", 00:18:23.278 "target": "spare", 00:18:23.278 "progress": { 00:18:23.278 "blocks": 2560, 00:18:23.278 "percent": 32 00:18:23.278 } 00:18:23.278 }, 00:18:23.278 "base_bdevs_list": [ 00:18:23.278 { 00:18:23.278 "name": "spare", 00:18:23.278 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:23.278 "is_configured": true, 00:18:23.278 "data_offset": 256, 00:18:23.278 "data_size": 7936 00:18:23.278 }, 00:18:23.278 { 00:18:23.278 "name": "BaseBdev2", 00:18:23.278 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:23.278 "is_configured": true, 00:18:23.278 "data_offset": 256, 00:18:23.278 "data_size": 7936 00:18:23.278 } 00:18:23.278 ] 00:18:23.278 }' 00:18:23.278 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.278 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.278 09:17:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:23.278 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=745 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.278 
09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.278 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.278 "name": "raid_bdev1", 00:18:23.278 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:23.278 "strip_size_kb": 0, 00:18:23.278 "state": "online", 00:18:23.278 "raid_level": "raid1", 00:18:23.278 "superblock": true, 00:18:23.278 "num_base_bdevs": 2, 00:18:23.279 "num_base_bdevs_discovered": 2, 00:18:23.279 "num_base_bdevs_operational": 2, 00:18:23.279 "process": { 00:18:23.279 "type": "rebuild", 00:18:23.279 "target": "spare", 00:18:23.279 "progress": { 00:18:23.279 "blocks": 2816, 00:18:23.279 "percent": 35 00:18:23.279 } 00:18:23.279 }, 00:18:23.279 "base_bdevs_list": [ 00:18:23.279 { 00:18:23.279 "name": "spare", 00:18:23.279 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:23.279 "is_configured": true, 00:18:23.279 "data_offset": 256, 00:18:23.279 "data_size": 7936 00:18:23.279 }, 00:18:23.279 { 00:18:23.279 "name": "BaseBdev2", 00:18:23.279 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:23.279 "is_configured": true, 00:18:23.279 "data_offset": 256, 00:18:23.279 "data_size": 7936 00:18:23.279 } 00:18:23.279 ] 00:18:23.279 }' 00:18:23.279 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.279 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.279 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.279 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.279 09:17:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.718 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.718 "name": "raid_bdev1", 00:18:24.718 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:24.718 "strip_size_kb": 0, 00:18:24.718 
"state": "online", 00:18:24.718 "raid_level": "raid1", 00:18:24.718 "superblock": true, 00:18:24.718 "num_base_bdevs": 2, 00:18:24.718 "num_base_bdevs_discovered": 2, 00:18:24.718 "num_base_bdevs_operational": 2, 00:18:24.718 "process": { 00:18:24.718 "type": "rebuild", 00:18:24.718 "target": "spare", 00:18:24.719 "progress": { 00:18:24.719 "blocks": 5632, 00:18:24.719 "percent": 70 00:18:24.719 } 00:18:24.719 }, 00:18:24.719 "base_bdevs_list": [ 00:18:24.719 { 00:18:24.719 "name": "spare", 00:18:24.719 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:24.719 "is_configured": true, 00:18:24.719 "data_offset": 256, 00:18:24.719 "data_size": 7936 00:18:24.719 }, 00:18:24.719 { 00:18:24.719 "name": "BaseBdev2", 00:18:24.719 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:24.719 "is_configured": true, 00:18:24.719 "data_offset": 256, 00:18:24.719 "data_size": 7936 00:18:24.719 } 00:18:24.719 ] 00:18:24.719 }' 00:18:24.719 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.719 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.719 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.719 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.719 09:17:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.307 [2024-10-15 09:17:42.979325] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:25.307 [2024-10-15 09:17:42.979436] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:25.307 [2024-10-15 09:17:42.979567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.567 "name": "raid_bdev1", 00:18:25.567 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:25.567 "strip_size_kb": 0, 00:18:25.567 "state": "online", 00:18:25.567 "raid_level": "raid1", 00:18:25.567 "superblock": true, 00:18:25.567 "num_base_bdevs": 2, 00:18:25.567 "num_base_bdevs_discovered": 2, 00:18:25.567 "num_base_bdevs_operational": 2, 00:18:25.567 "base_bdevs_list": [ 00:18:25.567 { 00:18:25.567 "name": "spare", 00:18:25.567 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:25.567 "is_configured": true, 00:18:25.567 "data_offset": 256, 00:18:25.567 "data_size": 7936 
00:18:25.567 }, 00:18:25.567 { 00:18:25.567 "name": "BaseBdev2", 00:18:25.567 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:25.567 "is_configured": true, 00:18:25.567 "data_offset": 256, 00:18:25.567 "data_size": 7936 00:18:25.567 } 00:18:25.567 ] 00:18:25.567 }' 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.567 
09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.567 "name": "raid_bdev1", 00:18:25.567 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:25.567 "strip_size_kb": 0, 00:18:25.567 "state": "online", 00:18:25.567 "raid_level": "raid1", 00:18:25.567 "superblock": true, 00:18:25.567 "num_base_bdevs": 2, 00:18:25.567 "num_base_bdevs_discovered": 2, 00:18:25.567 "num_base_bdevs_operational": 2, 00:18:25.567 "base_bdevs_list": [ 00:18:25.567 { 00:18:25.567 "name": "spare", 00:18:25.567 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:25.567 "is_configured": true, 00:18:25.567 "data_offset": 256, 00:18:25.567 "data_size": 7936 00:18:25.567 }, 00:18:25.567 { 00:18:25.567 "name": "BaseBdev2", 00:18:25.567 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:25.567 "is_configured": true, 00:18:25.567 "data_offset": 256, 00:18:25.567 "data_size": 7936 00:18:25.567 } 00:18:25.567 ] 00:18:25.567 }' 00:18:25.567 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.826 09:17:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.826 "name": "raid_bdev1", 00:18:25.826 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:25.826 "strip_size_kb": 0, 00:18:25.826 "state": "online", 00:18:25.826 "raid_level": "raid1", 00:18:25.826 "superblock": true, 00:18:25.826 "num_base_bdevs": 2, 00:18:25.826 "num_base_bdevs_discovered": 2, 00:18:25.826 "num_base_bdevs_operational": 2, 00:18:25.826 "base_bdevs_list": [ 00:18:25.826 { 00:18:25.826 "name": "spare", 00:18:25.826 "uuid": 
"daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:25.826 "is_configured": true, 00:18:25.826 "data_offset": 256, 00:18:25.826 "data_size": 7936 00:18:25.826 }, 00:18:25.826 { 00:18:25.826 "name": "BaseBdev2", 00:18:25.826 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:25.826 "is_configured": true, 00:18:25.826 "data_offset": 256, 00:18:25.826 "data_size": 7936 00:18:25.826 } 00:18:25.826 ] 00:18:25.826 }' 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.826 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.393 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:26.393 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.393 09:17:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.393 [2024-10-15 09:17:43.999936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.393 [2024-10-15 09:17:43.999976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.393 [2024-10-15 09:17:44.000080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.393 [2024-10-15 09:17:44.000163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.393 [2024-10-15 09:17:44.000184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:26.393 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:18:26.393 /dev/nbd0 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:26.652 1+0 records in 00:18:26.652 1+0 records out 00:18:26.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411588 s, 10.0 MB/s 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.652 09:17:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:26.652 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:26.912 /dev/nbd1 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:26.912 1+0 records in 00:18:26.912 1+0 records out 00:18:26.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294638 s, 13.9 MB/s 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:26.912 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.913 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:26.913 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:27.173 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:27.173 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.173 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:27.173 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.173 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:27.173 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.173 09:17:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:27.173 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.434 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:27.694 
09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.694 [2024-10-15 09:17:45.410879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:27.694 [2024-10-15 09:17:45.410947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.694 [2024-10-15 09:17:45.410974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:27.694 [2024-10-15 09:17:45.410986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.694 [2024-10-15 09:17:45.413293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.694 [2024-10-15 09:17:45.413343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:27.694 [2024-10-15 09:17:45.413424] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:18:27.694 [2024-10-15 09:17:45.413495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.694 [2024-10-15 09:17:45.413697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.694 spare 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.694 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.694 [2024-10-15 09:17:45.513626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:27.694 [2024-10-15 09:17:45.513719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:27.695 [2024-10-15 09:17:45.513887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:27.695 [2024-10-15 09:17:45.514088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:27.695 [2024-10-15 09:17:45.514106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:27.695 [2024-10-15 09:17:45.514278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.695 "name": "raid_bdev1", 00:18:27.695 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:27.695 "strip_size_kb": 0, 00:18:27.695 "state": "online", 00:18:27.695 "raid_level": "raid1", 00:18:27.695 "superblock": true, 00:18:27.695 "num_base_bdevs": 2, 00:18:27.695 "num_base_bdevs_discovered": 2, 00:18:27.695 "num_base_bdevs_operational": 2, 00:18:27.695 "base_bdevs_list": [ 
00:18:27.695 { 00:18:27.695 "name": "spare", 00:18:27.695 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:27.695 "is_configured": true, 00:18:27.695 "data_offset": 256, 00:18:27.695 "data_size": 7936 00:18:27.695 }, 00:18:27.695 { 00:18:27.695 "name": "BaseBdev2", 00:18:27.695 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:27.695 "is_configured": true, 00:18:27.695 "data_offset": 256, 00:18:27.695 "data_size": 7936 00:18:27.695 } 00:18:27.695 ] 00:18:27.695 }' 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.695 09:17:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.273 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.273 "name": "raid_bdev1", 00:18:28.274 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:28.274 "strip_size_kb": 0, 00:18:28.274 "state": "online", 00:18:28.274 "raid_level": "raid1", 00:18:28.274 "superblock": true, 00:18:28.274 "num_base_bdevs": 2, 00:18:28.274 "num_base_bdevs_discovered": 2, 00:18:28.274 "num_base_bdevs_operational": 2, 00:18:28.274 "base_bdevs_list": [ 00:18:28.274 { 00:18:28.274 "name": "spare", 00:18:28.274 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:28.274 "is_configured": true, 00:18:28.274 "data_offset": 256, 00:18:28.274 "data_size": 7936 00:18:28.274 }, 00:18:28.274 { 00:18:28.274 "name": "BaseBdev2", 00:18:28.274 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:28.274 "is_configured": true, 00:18:28.274 "data_offset": 256, 00:18:28.274 "data_size": 7936 00:18:28.274 } 00:18:28.274 ] 00:18:28.274 }' 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.274 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.544 [2024-10-15 09:17:46.169802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.544 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.544 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.544 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.544 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.544 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.544 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.544 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.545 "name": "raid_bdev1", 00:18:28.545 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:28.545 "strip_size_kb": 0, 00:18:28.545 "state": "online", 00:18:28.545 "raid_level": "raid1", 00:18:28.545 "superblock": true, 00:18:28.545 "num_base_bdevs": 2, 00:18:28.545 "num_base_bdevs_discovered": 1, 00:18:28.545 "num_base_bdevs_operational": 1, 00:18:28.545 "base_bdevs_list": [ 00:18:28.545 { 00:18:28.545 "name": null, 00:18:28.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.545 "is_configured": false, 00:18:28.545 "data_offset": 0, 00:18:28.545 "data_size": 7936 00:18:28.545 }, 00:18:28.545 { 00:18:28.545 "name": "BaseBdev2", 00:18:28.545 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:28.545 "is_configured": true, 00:18:28.545 "data_offset": 256, 00:18:28.545 "data_size": 7936 00:18:28.545 } 00:18:28.545 ] 00:18:28.545 }' 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.545 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.804 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.804 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:28.804 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.804 [2024-10-15 09:17:46.665012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.804 [2024-10-15 09:17:46.665261] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.804 [2024-10-15 09:17:46.665284] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:28.804 [2024-10-15 09:17:46.665317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.804 [2024-10-15 09:17:46.682227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:28.804 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.804 09:17:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:28.804 [2024-10-15 09:17:46.684384] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.185 "name": "raid_bdev1", 00:18:30.185 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:30.185 "strip_size_kb": 0, 00:18:30.185 "state": "online", 00:18:30.185 "raid_level": "raid1", 00:18:30.185 "superblock": true, 00:18:30.185 "num_base_bdevs": 2, 00:18:30.185 "num_base_bdevs_discovered": 2, 00:18:30.185 "num_base_bdevs_operational": 2, 00:18:30.185 "process": { 00:18:30.185 "type": "rebuild", 00:18:30.185 "target": "spare", 00:18:30.185 "progress": { 00:18:30.185 "blocks": 2560, 00:18:30.185 "percent": 32 00:18:30.185 } 00:18:30.185 }, 00:18:30.185 "base_bdevs_list": [ 00:18:30.185 { 00:18:30.185 "name": "spare", 00:18:30.185 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:30.185 "is_configured": true, 00:18:30.185 "data_offset": 256, 00:18:30.185 "data_size": 7936 00:18:30.185 }, 00:18:30.185 { 00:18:30.185 "name": "BaseBdev2", 00:18:30.185 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:30.185 "is_configured": true, 00:18:30.185 "data_offset": 256, 00:18:30.185 "data_size": 7936 00:18:30.185 } 00:18:30.185 ] 00:18:30.185 }' 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.185 09:17:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.185 [2024-10-15 09:17:47.847937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.185 [2024-10-15 09:17:47.890962] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.185 [2024-10-15 09:17:47.891058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.185 [2024-10-15 09:17:47.891077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.185 [2024-10-15 09:17:47.891105] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:30.185 09:17:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.185 "name": "raid_bdev1", 00:18:30.185 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:30.185 "strip_size_kb": 0, 00:18:30.185 "state": "online", 00:18:30.185 "raid_level": "raid1", 00:18:30.185 "superblock": true, 00:18:30.185 "num_base_bdevs": 2, 00:18:30.185 "num_base_bdevs_discovered": 1, 00:18:30.185 "num_base_bdevs_operational": 1, 00:18:30.185 "base_bdevs_list": [ 00:18:30.185 { 00:18:30.185 "name": null, 00:18:30.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.185 "is_configured": false, 00:18:30.185 "data_offset": 0, 00:18:30.185 "data_size": 7936 00:18:30.185 }, 00:18:30.185 { 00:18:30.185 "name": "BaseBdev2", 00:18:30.185 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:30.185 "is_configured": true, 00:18:30.185 "data_offset": 256, 00:18:30.185 "data_size": 7936 00:18:30.185 } 
00:18:30.185 ] 00:18:30.185 }' 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.185 09:17:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.752 09:17:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:30.752 09:17:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.752 09:17:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.752 [2024-10-15 09:17:48.386432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.752 [2024-10-15 09:17:48.386504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.752 [2024-10-15 09:17:48.386541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:30.752 [2024-10-15 09:17:48.386555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.752 [2024-10-15 09:17:48.386863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.752 [2024-10-15 09:17:48.386886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.752 [2024-10-15 09:17:48.386954] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:30.753 [2024-10-15 09:17:48.386972] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.753 [2024-10-15 09:17:48.386983] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:30.753 [2024-10-15 09:17:48.387019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.753 [2024-10-15 09:17:48.404459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:30.753 spare 00:18:30.753 09:17:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.753 09:17:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:30.753 [2024-10-15 09:17:48.406611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.700 "name": 
"raid_bdev1", 00:18:31.700 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:31.700 "strip_size_kb": 0, 00:18:31.700 "state": "online", 00:18:31.700 "raid_level": "raid1", 00:18:31.700 "superblock": true, 00:18:31.700 "num_base_bdevs": 2, 00:18:31.700 "num_base_bdevs_discovered": 2, 00:18:31.700 "num_base_bdevs_operational": 2, 00:18:31.700 "process": { 00:18:31.700 "type": "rebuild", 00:18:31.700 "target": "spare", 00:18:31.700 "progress": { 00:18:31.700 "blocks": 2560, 00:18:31.700 "percent": 32 00:18:31.700 } 00:18:31.700 }, 00:18:31.700 "base_bdevs_list": [ 00:18:31.700 { 00:18:31.700 "name": "spare", 00:18:31.700 "uuid": "daecd14c-9df9-5d98-bc2f-39a49bf8cb95", 00:18:31.700 "is_configured": true, 00:18:31.700 "data_offset": 256, 00:18:31.700 "data_size": 7936 00:18:31.700 }, 00:18:31.700 { 00:18:31.700 "name": "BaseBdev2", 00:18:31.700 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:31.700 "is_configured": true, 00:18:31.700 "data_offset": 256, 00:18:31.700 "data_size": 7936 00:18:31.700 } 00:18:31.700 ] 00:18:31.700 }' 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.700 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.700 [2024-10-15 09:17:49.554723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:31.981 [2024-10-15 09:17:49.613078] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:31.981 [2024-10-15 09:17:49.613160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.981 [2024-10-15 09:17:49.613183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.981 [2024-10-15 09:17:49.613192] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.981 "name": "raid_bdev1", 00:18:31.981 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:31.981 "strip_size_kb": 0, 00:18:31.981 "state": "online", 00:18:31.981 "raid_level": "raid1", 00:18:31.981 "superblock": true, 00:18:31.981 "num_base_bdevs": 2, 00:18:31.981 "num_base_bdevs_discovered": 1, 00:18:31.981 "num_base_bdevs_operational": 1, 00:18:31.981 "base_bdevs_list": [ 00:18:31.981 { 00:18:31.981 "name": null, 00:18:31.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.981 "is_configured": false, 00:18:31.981 "data_offset": 0, 00:18:31.981 "data_size": 7936 00:18:31.981 }, 00:18:31.981 { 00:18:31.981 "name": "BaseBdev2", 00:18:31.981 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:31.981 "is_configured": true, 00:18:31.981 "data_offset": 256, 00:18:31.981 "data_size": 7936 00:18:31.981 } 00:18:31.981 ] 00:18:31.981 }' 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.981 09:17:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.240 09:17:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.240 "name": "raid_bdev1", 00:18:32.240 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:32.240 "strip_size_kb": 0, 00:18:32.240 "state": "online", 00:18:32.240 "raid_level": "raid1", 00:18:32.240 "superblock": true, 00:18:32.240 "num_base_bdevs": 2, 00:18:32.240 "num_base_bdevs_discovered": 1, 00:18:32.240 "num_base_bdevs_operational": 1, 00:18:32.240 "base_bdevs_list": [ 00:18:32.240 { 00:18:32.240 "name": null, 00:18:32.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.240 "is_configured": false, 00:18:32.240 "data_offset": 0, 00:18:32.240 "data_size": 7936 00:18:32.240 }, 00:18:32.240 { 00:18:32.240 "name": "BaseBdev2", 00:18:32.240 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:32.240 "is_configured": true, 00:18:32.240 "data_offset": 256, 00:18:32.240 "data_size": 7936 00:18:32.240 } 00:18:32.240 ] 00:18:32.240 }' 00:18:32.240 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.500 [2024-10-15 09:17:50.240319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:32.500 [2024-10-15 09:17:50.240379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.500 [2024-10-15 09:17:50.240407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:32.500 [2024-10-15 09:17:50.240417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.500 [2024-10-15 09:17:50.240656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.500 [2024-10-15 09:17:50.240667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:32.500 [2024-10-15 09:17:50.240737] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:32.500 [2024-10-15 09:17:50.240755] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.500 [2024-10-15 09:17:50.240767] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:32.500 [2024-10-15 09:17:50.240781] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:32.500 BaseBdev1 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.500 09:17:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.440 "name": "raid_bdev1", 00:18:33.440 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:33.440 "strip_size_kb": 0, 00:18:33.440 "state": "online", 00:18:33.440 "raid_level": "raid1", 00:18:33.440 "superblock": true, 00:18:33.440 "num_base_bdevs": 2, 00:18:33.440 "num_base_bdevs_discovered": 1, 00:18:33.440 "num_base_bdevs_operational": 1, 00:18:33.440 "base_bdevs_list": [ 00:18:33.440 { 00:18:33.440 "name": null, 00:18:33.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.440 "is_configured": false, 00:18:33.440 "data_offset": 0, 00:18:33.440 "data_size": 7936 00:18:33.440 }, 00:18:33.440 { 00:18:33.440 "name": "BaseBdev2", 00:18:33.440 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:33.440 "is_configured": true, 00:18:33.440 "data_offset": 256, 00:18:33.440 "data_size": 7936 00:18:33.440 } 00:18:33.440 ] 00:18:33.440 }' 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.440 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.009 "name": "raid_bdev1", 00:18:34.009 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:34.009 "strip_size_kb": 0, 00:18:34.009 "state": "online", 00:18:34.009 "raid_level": "raid1", 00:18:34.009 "superblock": true, 00:18:34.009 "num_base_bdevs": 2, 00:18:34.009 "num_base_bdevs_discovered": 1, 00:18:34.009 "num_base_bdevs_operational": 1, 00:18:34.009 "base_bdevs_list": [ 00:18:34.009 { 00:18:34.009 "name": null, 00:18:34.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.009 "is_configured": false, 00:18:34.009 "data_offset": 0, 00:18:34.009 "data_size": 7936 00:18:34.009 }, 00:18:34.009 { 00:18:34.009 "name": "BaseBdev2", 00:18:34.009 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:34.009 "is_configured": 
true, 00:18:34.009 "data_offset": 256, 00:18:34.009 "data_size": 7936 00:18:34.009 } 00:18:34.009 ] 00:18:34.009 }' 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.009 [2024-10-15 09:17:51.829797] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.009 [2024-10-15 09:17:51.829999] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.009 [2024-10-15 09:17:51.830021] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:34.009 request: 00:18:34.009 { 00:18:34.009 "base_bdev": "BaseBdev1", 00:18:34.009 "raid_bdev": "raid_bdev1", 00:18:34.009 "method": "bdev_raid_add_base_bdev", 00:18:34.009 "req_id": 1 00:18:34.009 } 00:18:34.009 Got JSON-RPC error response 00:18:34.009 response: 00:18:34.009 { 00:18:34.009 "code": -22, 00:18:34.009 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:34.009 } 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.009 09:17:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.388 "name": "raid_bdev1", 00:18:35.388 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:35.388 "strip_size_kb": 0, 00:18:35.388 "state": "online", 00:18:35.388 "raid_level": "raid1", 00:18:35.388 "superblock": true, 00:18:35.388 "num_base_bdevs": 2, 00:18:35.388 "num_base_bdevs_discovered": 1, 00:18:35.388 "num_base_bdevs_operational": 1, 00:18:35.388 "base_bdevs_list": [ 00:18:35.388 { 00:18:35.388 "name": null, 00:18:35.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.388 "is_configured": false, 00:18:35.388 
"data_offset": 0, 00:18:35.388 "data_size": 7936 00:18:35.388 }, 00:18:35.388 { 00:18:35.388 "name": "BaseBdev2", 00:18:35.388 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:35.388 "is_configured": true, 00:18:35.388 "data_offset": 256, 00:18:35.388 "data_size": 7936 00:18:35.388 } 00:18:35.388 ] 00:18:35.388 }' 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.388 09:17:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.648 "name": "raid_bdev1", 00:18:35.648 "uuid": "27120f91-0922-4dd6-a14f-1dfe0d7bd8cf", 00:18:35.648 
"strip_size_kb": 0, 00:18:35.648 "state": "online", 00:18:35.648 "raid_level": "raid1", 00:18:35.648 "superblock": true, 00:18:35.648 "num_base_bdevs": 2, 00:18:35.648 "num_base_bdevs_discovered": 1, 00:18:35.648 "num_base_bdevs_operational": 1, 00:18:35.648 "base_bdevs_list": [ 00:18:35.648 { 00:18:35.648 "name": null, 00:18:35.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.648 "is_configured": false, 00:18:35.648 "data_offset": 0, 00:18:35.648 "data_size": 7936 00:18:35.648 }, 00:18:35.648 { 00:18:35.648 "name": "BaseBdev2", 00:18:35.648 "uuid": "8e3f7c3a-c9d9-548e-9faf-8b7b470e71f0", 00:18:35.648 "is_configured": true, 00:18:35.648 "data_offset": 256, 00:18:35.648 "data_size": 7936 00:18:35.648 } 00:18:35.648 ] 00:18:35.648 }' 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88099 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88099 ']' 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88099 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88099 00:18:35.648 09:17:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:35.648 killing process with pid 88099 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88099' 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88099 00:18:35.648 Received shutdown signal, test time was about 60.000000 seconds 00:18:35.648 00:18:35.648 Latency(us) 00:18:35.648 [2024-10-15T09:17:53.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.648 [2024-10-15T09:17:53.544Z] =================================================================================================================== 00:18:35.648 [2024-10-15T09:17:53.544Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.648 [2024-10-15 09:17:53.466876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.648 09:17:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88099 00:18:35.648 [2024-10-15 09:17:53.467038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.648 [2024-10-15 09:17:53.467095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.648 [2024-10-15 09:17:53.467109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:36.216 [2024-10-15 09:17:53.858370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.594 09:17:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:37.594 00:18:37.594 real 0m20.644s 00:18:37.594 user 0m27.074s 00:18:37.594 sys 0m2.794s 00:18:37.594 09:17:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.594 09:17:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.594 ************************************ 00:18:37.594 END TEST raid_rebuild_test_sb_md_separate 00:18:37.594 ************************************ 00:18:37.594 09:17:55 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:37.594 09:17:55 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:37.594 09:17:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:37.594 09:17:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.594 09:17:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.594 ************************************ 00:18:37.594 START TEST raid_state_function_test_sb_md_interleaved 00:18:37.594 ************************************ 00:18:37.594 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:37.594 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:37.594 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:37.594 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:37.594 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:37.594 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:37.594 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:37.594 09:17:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:37.594 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88796 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:37.595 Process raid pid: 88796 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88796' 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88796 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88796 ']' 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.595 09:17:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.595 [2024-10-15 09:17:55.369014] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:18:37.595 [2024-10-15 09:17:55.369137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.855 [2024-10-15 09:17:55.539774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.855 [2024-10-15 09:17:55.677913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.115 [2024-10-15 09:17:55.922956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.115 [2024-10-15 09:17:55.923007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.374 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:38.374 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:38.374 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:38.374 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.374 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.374 [2024-10-15 09:17:56.269326] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:38.374 [2024-10-15 09:17:56.269383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:38.374 [2024-10-15 09:17:56.269396] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.374 [2024-10-15 09:17:56.269408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.634 09:17:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.634 09:17:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.634 "name": "Existed_Raid", 00:18:38.634 "uuid": "08616161-8ae4-42ad-a468-a846399a4b6e", 00:18:38.634 "strip_size_kb": 0, 00:18:38.634 "state": "configuring", 00:18:38.634 "raid_level": "raid1", 00:18:38.634 "superblock": true, 00:18:38.634 "num_base_bdevs": 2, 00:18:38.634 "num_base_bdevs_discovered": 0, 00:18:38.634 "num_base_bdevs_operational": 2, 00:18:38.634 "base_bdevs_list": [ 00:18:38.634 { 00:18:38.634 "name": "BaseBdev1", 00:18:38.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.634 "is_configured": false, 00:18:38.634 "data_offset": 0, 00:18:38.634 "data_size": 0 00:18:38.634 }, 00:18:38.634 { 00:18:38.634 "name": "BaseBdev2", 00:18:38.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.634 "is_configured": false, 00:18:38.634 "data_offset": 0, 00:18:38.634 "data_size": 0 00:18:38.634 } 00:18:38.634 ] 00:18:38.634 }' 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.634 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.893 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:38.893 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.893 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.894 [2024-10-15 09:17:56.740436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:38.894 [2024-10-15 09:17:56.740478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:38.894 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.894 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:38.894 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.894 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.894 [2024-10-15 09:17:56.752458] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:38.894 [2024-10-15 09:17:56.752505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:38.894 [2024-10-15 09:17:56.752516] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.894 [2024-10-15 09:17:56.752528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.894 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.894 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:38.894 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.894 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.163 [2024-10-15 09:17:56.802274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.163 BaseBdev1 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.163 [ 00:18:39.163 { 00:18:39.163 "name": "BaseBdev1", 00:18:39.163 "aliases": [ 00:18:39.163 "470b98f4-5c2f-4644-99f0-ca16ee5bdccb" 00:18:39.163 ], 00:18:39.163 "product_name": "Malloc disk", 00:18:39.163 "block_size": 4128, 00:18:39.163 "num_blocks": 8192, 00:18:39.163 "uuid": "470b98f4-5c2f-4644-99f0-ca16ee5bdccb", 00:18:39.163 "md_size": 32, 00:18:39.163 
"md_interleave": true, 00:18:39.163 "dif_type": 0, 00:18:39.163 "assigned_rate_limits": { 00:18:39.163 "rw_ios_per_sec": 0, 00:18:39.163 "rw_mbytes_per_sec": 0, 00:18:39.163 "r_mbytes_per_sec": 0, 00:18:39.163 "w_mbytes_per_sec": 0 00:18:39.163 }, 00:18:39.163 "claimed": true, 00:18:39.163 "claim_type": "exclusive_write", 00:18:39.163 "zoned": false, 00:18:39.163 "supported_io_types": { 00:18:39.163 "read": true, 00:18:39.163 "write": true, 00:18:39.163 "unmap": true, 00:18:39.163 "flush": true, 00:18:39.163 "reset": true, 00:18:39.163 "nvme_admin": false, 00:18:39.163 "nvme_io": false, 00:18:39.163 "nvme_io_md": false, 00:18:39.163 "write_zeroes": true, 00:18:39.163 "zcopy": true, 00:18:39.163 "get_zone_info": false, 00:18:39.163 "zone_management": false, 00:18:39.163 "zone_append": false, 00:18:39.163 "compare": false, 00:18:39.163 "compare_and_write": false, 00:18:39.163 "abort": true, 00:18:39.163 "seek_hole": false, 00:18:39.163 "seek_data": false, 00:18:39.163 "copy": true, 00:18:39.163 "nvme_iov_md": false 00:18:39.163 }, 00:18:39.163 "memory_domains": [ 00:18:39.163 { 00:18:39.163 "dma_device_id": "system", 00:18:39.163 "dma_device_type": 1 00:18:39.163 }, 00:18:39.163 { 00:18:39.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.163 "dma_device_type": 2 00:18:39.163 } 00:18:39.163 ], 00:18:39.163 "driver_specific": {} 00:18:39.163 } 00:18:39.163 ] 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.163 09:17:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.163 "name": "Existed_Raid", 00:18:39.163 "uuid": "83ddb2b2-7341-4fb4-a366-dfbdec66af6f", 00:18:39.163 "strip_size_kb": 0, 00:18:39.163 "state": "configuring", 00:18:39.163 "raid_level": "raid1", 
00:18:39.163 "superblock": true, 00:18:39.163 "num_base_bdevs": 2, 00:18:39.163 "num_base_bdevs_discovered": 1, 00:18:39.163 "num_base_bdevs_operational": 2, 00:18:39.163 "base_bdevs_list": [ 00:18:39.163 { 00:18:39.163 "name": "BaseBdev1", 00:18:39.163 "uuid": "470b98f4-5c2f-4644-99f0-ca16ee5bdccb", 00:18:39.163 "is_configured": true, 00:18:39.163 "data_offset": 256, 00:18:39.163 "data_size": 7936 00:18:39.163 }, 00:18:39.163 { 00:18:39.163 "name": "BaseBdev2", 00:18:39.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.163 "is_configured": false, 00:18:39.163 "data_offset": 0, 00:18:39.163 "data_size": 0 00:18:39.163 } 00:18:39.163 ] 00:18:39.163 }' 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.163 09:17:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.440 [2024-10-15 09:17:57.293598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:39.440 [2024-10-15 09:17:57.293683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.440 [2024-10-15 09:17:57.301659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.440 [2024-10-15 09:17:57.303636] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.440 [2024-10-15 09:17:57.303680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.440 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.441 
09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.441 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.700 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.700 "name": "Existed_Raid", 00:18:39.700 "uuid": "03c9afff-0c5a-459f-baf4-9693aebbb0c9", 00:18:39.700 "strip_size_kb": 0, 00:18:39.700 "state": "configuring", 00:18:39.700 "raid_level": "raid1", 00:18:39.700 "superblock": true, 00:18:39.700 "num_base_bdevs": 2, 00:18:39.700 "num_base_bdevs_discovered": 1, 00:18:39.700 "num_base_bdevs_operational": 2, 00:18:39.700 "base_bdevs_list": [ 00:18:39.700 { 00:18:39.700 "name": "BaseBdev1", 00:18:39.700 "uuid": "470b98f4-5c2f-4644-99f0-ca16ee5bdccb", 00:18:39.700 "is_configured": true, 00:18:39.700 "data_offset": 256, 00:18:39.700 "data_size": 7936 00:18:39.700 }, 00:18:39.700 { 00:18:39.700 "name": "BaseBdev2", 00:18:39.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.700 "is_configured": false, 00:18:39.700 "data_offset": 0, 00:18:39.700 "data_size": 0 00:18:39.700 } 00:18:39.700 ] 00:18:39.700 }' 00:18:39.700 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:39.700 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.960 [2024-10-15 09:17:57.805129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.960 [2024-10-15 09:17:57.805393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:39.960 [2024-10-15 09:17:57.805409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:39.960 [2024-10-15 09:17:57.805520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:39.960 [2024-10-15 09:17:57.805630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:39.960 [2024-10-15 09:17:57.805644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:39.960 [2024-10-15 09:17:57.805756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.960 BaseBdev2 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.960 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.960 [ 00:18:39.960 { 00:18:39.960 "name": "BaseBdev2", 00:18:39.960 "aliases": [ 00:18:39.960 "437a7175-82d9-41a7-9dce-f951ad547aa2" 00:18:39.960 ], 00:18:39.960 "product_name": "Malloc disk", 00:18:39.960 "block_size": 4128, 00:18:39.960 "num_blocks": 8192, 00:18:39.960 "uuid": "437a7175-82d9-41a7-9dce-f951ad547aa2", 00:18:39.960 "md_size": 32, 00:18:39.960 "md_interleave": true, 00:18:39.960 "dif_type": 0, 00:18:39.960 "assigned_rate_limits": { 00:18:39.960 "rw_ios_per_sec": 0, 00:18:39.960 "rw_mbytes_per_sec": 0, 00:18:39.960 "r_mbytes_per_sec": 0, 00:18:39.960 "w_mbytes_per_sec": 0 00:18:39.960 }, 00:18:39.961 "claimed": true, 00:18:39.961 "claim_type": "exclusive_write", 
00:18:39.961 "zoned": false, 00:18:39.961 "supported_io_types": { 00:18:39.961 "read": true, 00:18:39.961 "write": true, 00:18:39.961 "unmap": true, 00:18:39.961 "flush": true, 00:18:39.961 "reset": true, 00:18:39.961 "nvme_admin": false, 00:18:39.961 "nvme_io": false, 00:18:39.961 "nvme_io_md": false, 00:18:39.961 "write_zeroes": true, 00:18:39.961 "zcopy": true, 00:18:39.961 "get_zone_info": false, 00:18:39.961 "zone_management": false, 00:18:39.961 "zone_append": false, 00:18:39.961 "compare": false, 00:18:39.961 "compare_and_write": false, 00:18:39.961 "abort": true, 00:18:39.961 "seek_hole": false, 00:18:39.961 "seek_data": false, 00:18:39.961 "copy": true, 00:18:39.961 "nvme_iov_md": false 00:18:39.961 }, 00:18:39.961 "memory_domains": [ 00:18:39.961 { 00:18:39.961 "dma_device_id": "system", 00:18:39.961 "dma_device_type": 1 00:18:39.961 }, 00:18:39.961 { 00:18:39.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.961 "dma_device_type": 2 00:18:39.961 } 00:18:39.961 ], 00:18:39.961 "driver_specific": {} 00:18:39.961 } 00:18:39.961 ] 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.961 
09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.961 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.221 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.221 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.221 "name": "Existed_Raid", 00:18:40.221 "uuid": "03c9afff-0c5a-459f-baf4-9693aebbb0c9", 00:18:40.221 "strip_size_kb": 0, 00:18:40.221 "state": "online", 00:18:40.221 "raid_level": "raid1", 00:18:40.221 "superblock": true, 00:18:40.221 "num_base_bdevs": 2, 00:18:40.221 "num_base_bdevs_discovered": 2, 00:18:40.221 
"num_base_bdevs_operational": 2, 00:18:40.221 "base_bdevs_list": [ 00:18:40.221 { 00:18:40.221 "name": "BaseBdev1", 00:18:40.221 "uuid": "470b98f4-5c2f-4644-99f0-ca16ee5bdccb", 00:18:40.221 "is_configured": true, 00:18:40.221 "data_offset": 256, 00:18:40.221 "data_size": 7936 00:18:40.221 }, 00:18:40.221 { 00:18:40.221 "name": "BaseBdev2", 00:18:40.221 "uuid": "437a7175-82d9-41a7-9dce-f951ad547aa2", 00:18:40.221 "is_configured": true, 00:18:40.221 "data_offset": 256, 00:18:40.221 "data_size": 7936 00:18:40.221 } 00:18:40.221 ] 00:18:40.221 }' 00:18:40.221 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.221 09:17:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.480 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:40.480 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:40.480 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:40.480 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.481 09:17:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 [2024-10-15 09:17:58.204929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.481 "name": "Existed_Raid", 00:18:40.481 "aliases": [ 00:18:40.481 "03c9afff-0c5a-459f-baf4-9693aebbb0c9" 00:18:40.481 ], 00:18:40.481 "product_name": "Raid Volume", 00:18:40.481 "block_size": 4128, 00:18:40.481 "num_blocks": 7936, 00:18:40.481 "uuid": "03c9afff-0c5a-459f-baf4-9693aebbb0c9", 00:18:40.481 "md_size": 32, 00:18:40.481 "md_interleave": true, 00:18:40.481 "dif_type": 0, 00:18:40.481 "assigned_rate_limits": { 00:18:40.481 "rw_ios_per_sec": 0, 00:18:40.481 "rw_mbytes_per_sec": 0, 00:18:40.481 "r_mbytes_per_sec": 0, 00:18:40.481 "w_mbytes_per_sec": 0 00:18:40.481 }, 00:18:40.481 "claimed": false, 00:18:40.481 "zoned": false, 00:18:40.481 "supported_io_types": { 00:18:40.481 "read": true, 00:18:40.481 "write": true, 00:18:40.481 "unmap": false, 00:18:40.481 "flush": false, 00:18:40.481 "reset": true, 00:18:40.481 "nvme_admin": false, 00:18:40.481 "nvme_io": false, 00:18:40.481 "nvme_io_md": false, 00:18:40.481 "write_zeroes": true, 00:18:40.481 "zcopy": false, 00:18:40.481 "get_zone_info": false, 00:18:40.481 "zone_management": false, 00:18:40.481 "zone_append": false, 00:18:40.481 "compare": false, 00:18:40.481 "compare_and_write": false, 00:18:40.481 "abort": false, 00:18:40.481 "seek_hole": false, 00:18:40.481 "seek_data": false, 00:18:40.481 "copy": false, 00:18:40.481 "nvme_iov_md": false 00:18:40.481 }, 00:18:40.481 "memory_domains": [ 00:18:40.481 { 00:18:40.481 "dma_device_id": "system", 00:18:40.481 "dma_device_type": 1 00:18:40.481 }, 00:18:40.481 { 00:18:40.481 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:40.481 "dma_device_type": 2 00:18:40.481 }, 00:18:40.481 { 00:18:40.481 "dma_device_id": "system", 00:18:40.481 "dma_device_type": 1 00:18:40.481 }, 00:18:40.481 { 00:18:40.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.481 "dma_device_type": 2 00:18:40.481 } 00:18:40.481 ], 00:18:40.481 "driver_specific": { 00:18:40.481 "raid": { 00:18:40.481 "uuid": "03c9afff-0c5a-459f-baf4-9693aebbb0c9", 00:18:40.481 "strip_size_kb": 0, 00:18:40.481 "state": "online", 00:18:40.481 "raid_level": "raid1", 00:18:40.481 "superblock": true, 00:18:40.481 "num_base_bdevs": 2, 00:18:40.481 "num_base_bdevs_discovered": 2, 00:18:40.481 "num_base_bdevs_operational": 2, 00:18:40.481 "base_bdevs_list": [ 00:18:40.481 { 00:18:40.481 "name": "BaseBdev1", 00:18:40.481 "uuid": "470b98f4-5c2f-4644-99f0-ca16ee5bdccb", 00:18:40.481 "is_configured": true, 00:18:40.481 "data_offset": 256, 00:18:40.481 "data_size": 7936 00:18:40.481 }, 00:18:40.481 { 00:18:40.481 "name": "BaseBdev2", 00:18:40.481 "uuid": "437a7175-82d9-41a7-9dce-f951ad547aa2", 00:18:40.481 "is_configured": true, 00:18:40.481 "data_offset": 256, 00:18:40.481 "data_size": 7936 00:18:40.481 } 00:18:40.481 ] 00:18:40.481 } 00:18:40.481 } 00:18:40.481 }' 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:40.481 BaseBdev2' 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.481 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:40.740 
09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.740 [2024-10-15 09:17:58.436204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.740 09:17:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.740 "name": "Existed_Raid", 00:18:40.740 "uuid": "03c9afff-0c5a-459f-baf4-9693aebbb0c9", 00:18:40.740 "strip_size_kb": 0, 00:18:40.740 "state": "online", 00:18:40.740 "raid_level": "raid1", 00:18:40.740 "superblock": true, 00:18:40.740 "num_base_bdevs": 2, 00:18:40.740 "num_base_bdevs_discovered": 1, 00:18:40.740 "num_base_bdevs_operational": 1, 00:18:40.740 "base_bdevs_list": [ 00:18:40.740 { 00:18:40.740 "name": null, 00:18:40.740 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:40.740 "is_configured": false, 00:18:40.740 "data_offset": 0, 00:18:40.740 "data_size": 7936 00:18:40.740 }, 00:18:40.740 { 00:18:40.740 "name": "BaseBdev2", 00:18:40.740 "uuid": "437a7175-82d9-41a7-9dce-f951ad547aa2", 00:18:40.740 "is_configured": true, 00:18:40.740 "data_offset": 256, 00:18:40.740 "data_size": 7936 00:18:40.740 } 00:18:40.740 ] 00:18:40.740 }' 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.740 09:17:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:41.308 09:17:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.308 [2024-10-15 09:17:59.065832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:41.308 [2024-10-15 09:17:59.065954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.308 [2024-10-15 09:17:59.166603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.308 [2024-10-15 09:17:59.166670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.308 [2024-10-15 09:17:59.166702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.308 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88796 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88796 ']' 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88796 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88796 00:18:41.566 killing process with pid 88796 00:18:41.566 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:41.567 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:41.567 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88796' 00:18:41.567 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88796 00:18:41.567 [2024-10-15 09:17:59.265554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.567 09:17:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88796 00:18:41.567 [2024-10-15 09:17:59.282778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.956 
09:18:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:42.957 00:18:42.957 real 0m5.195s 00:18:42.957 user 0m7.486s 00:18:42.957 sys 0m0.895s 00:18:42.957 09:18:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.957 09:18:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.957 ************************************ 00:18:42.957 END TEST raid_state_function_test_sb_md_interleaved 00:18:42.957 ************************************ 00:18:42.957 09:18:00 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:42.957 09:18:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:42.957 09:18:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.957 09:18:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.957 ************************************ 00:18:42.957 START TEST raid_superblock_test_md_interleaved 00:18:42.957 ************************************ 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89043 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89043 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89043 ']' 00:18:42.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
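The `Waiting for process to start up...` message above marks the start of `raid_superblock_test`. The `(( i <= num_base_bdevs ))` and `base_bdevs_malloc+=(...)` trace lines that follow come from a per-base-bdev setup loop; a condensed, runnable sketch of just that array bookkeeping (the `rpc_cmd` calls are omitted, and the names/UUIDs simply mirror the log):

```shell
# For each base bdev the test records three parallel entries: a malloc bdev
# name, a passthru ("pt") wrapper name, and a fixed UUID for the pt bdev.
num_base_bdevs=2
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs_malloc+=("malloc$i")
  base_bdevs_pt+=("pt$i")
  base_bdevs_pt_uuid+=("00000000-0000-0000-0000-00000000000$i")
done

echo "${base_bdevs_malloc[*]} / ${base_bdevs_pt[*]}"
```

In the real script each iteration additionally issues `bdev_malloc_create ... -b malloc$i` and `bdev_passthru_create -b malloc$i -p pt$i -u <uuid>`, which is exactly the `malloc1` creation and `pt1` registration traced in the log lines that follow.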
00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.957 09:18:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.957 [2024-10-15 09:18:00.630781] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:18:42.957 [2024-10-15 09:18:00.631071] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89043 ] 00:18:42.957 [2024-10-15 09:18:00.799285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.216 [2024-10-15 09:18:00.925608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.474 [2024-10-15 09:18:01.157075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.474 [2024-10-15 09:18:01.157211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.734 malloc1 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.734 [2024-10-15 09:18:01.546967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.734 [2024-10-15 09:18:01.547094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:43.734 [2024-10-15 09:18:01.547178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:43.734 [2024-10-15 09:18:01.547215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.734 [2024-10-15 09:18:01.549131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.734 [2024-10-15 09:18:01.549205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.734 pt1 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.734 09:18:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.734 malloc2 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.734 [2024-10-15 09:18:01.612088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.734 [2024-10-15 09:18:01.612151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.734 [2024-10-15 09:18:01.612175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:43.734 [2024-10-15 09:18:01.612184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.734 [2024-10-15 09:18:01.614325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.734 [2024-10-15 09:18:01.614378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.734 pt2 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.734 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.734 [2024-10-15 09:18:01.624158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.735 [2024-10-15 09:18:01.626391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.735 [2024-10-15 09:18:01.626714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:43.735 [2024-10-15 09:18:01.626738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:43.735 [2024-10-15 09:18:01.626877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:43.735 [2024-10-15 09:18:01.627006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:43.735 [2024-10-15 09:18:01.627037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:43.735 [2024-10-15 09:18:01.627160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.994 09:18:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.994 "name": "raid_bdev1", 00:18:43.994 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:43.994 "strip_size_kb": 0, 00:18:43.994 "state": "online", 00:18:43.994 "raid_level": "raid1", 00:18:43.994 "superblock": true, 00:18:43.994 "num_base_bdevs": 2, 00:18:43.994 "num_base_bdevs_discovered": 2, 00:18:43.994 "num_base_bdevs_operational": 2, 00:18:43.994 "base_bdevs_list": [ 00:18:43.994 { 00:18:43.994 "name": "pt1", 00:18:43.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:43.994 "is_configured": true, 00:18:43.994 "data_offset": 256, 00:18:43.994 "data_size": 7936 00:18:43.994 }, 00:18:43.994 { 00:18:43.994 "name": "pt2", 00:18:43.994 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:43.994 "is_configured": true, 00:18:43.994 "data_offset": 256, 00:18:43.994 "data_size": 7936 00:18:43.994 } 00:18:43.994 ] 00:18:43.994 }' 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.994 09:18:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.253 [2024-10-15 09:18:02.103653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:18:44.253 "name": "raid_bdev1", 00:18:44.253 "aliases": [ 00:18:44.253 "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8" 00:18:44.253 ], 00:18:44.253 "product_name": "Raid Volume", 00:18:44.253 "block_size": 4128, 00:18:44.253 "num_blocks": 7936, 00:18:44.253 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:44.253 "md_size": 32, 00:18:44.253 "md_interleave": true, 00:18:44.253 "dif_type": 0, 00:18:44.253 "assigned_rate_limits": { 00:18:44.253 "rw_ios_per_sec": 0, 00:18:44.253 "rw_mbytes_per_sec": 0, 00:18:44.253 "r_mbytes_per_sec": 0, 00:18:44.253 "w_mbytes_per_sec": 0 00:18:44.253 }, 00:18:44.253 "claimed": false, 00:18:44.253 "zoned": false, 00:18:44.253 "supported_io_types": { 00:18:44.253 "read": true, 00:18:44.253 "write": true, 00:18:44.253 "unmap": false, 00:18:44.253 "flush": false, 00:18:44.253 "reset": true, 00:18:44.253 "nvme_admin": false, 00:18:44.253 "nvme_io": false, 00:18:44.253 "nvme_io_md": false, 00:18:44.253 "write_zeroes": true, 00:18:44.253 "zcopy": false, 00:18:44.253 "get_zone_info": false, 00:18:44.253 "zone_management": false, 00:18:44.253 "zone_append": false, 00:18:44.253 "compare": false, 00:18:44.253 "compare_and_write": false, 00:18:44.253 "abort": false, 00:18:44.253 "seek_hole": false, 00:18:44.253 "seek_data": false, 00:18:44.253 "copy": false, 00:18:44.253 "nvme_iov_md": false 00:18:44.253 }, 00:18:44.253 "memory_domains": [ 00:18:44.253 { 00:18:44.253 "dma_device_id": "system", 00:18:44.253 "dma_device_type": 1 00:18:44.253 }, 00:18:44.253 { 00:18:44.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.253 "dma_device_type": 2 00:18:44.253 }, 00:18:44.253 { 00:18:44.253 "dma_device_id": "system", 00:18:44.253 "dma_device_type": 1 00:18:44.253 }, 00:18:44.253 { 00:18:44.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.253 "dma_device_type": 2 00:18:44.253 } 00:18:44.253 ], 00:18:44.253 "driver_specific": { 00:18:44.253 "raid": { 00:18:44.253 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:44.253 "strip_size_kb": 0, 
00:18:44.253 "state": "online", 00:18:44.253 "raid_level": "raid1", 00:18:44.253 "superblock": true, 00:18:44.253 "num_base_bdevs": 2, 00:18:44.253 "num_base_bdevs_discovered": 2, 00:18:44.253 "num_base_bdevs_operational": 2, 00:18:44.253 "base_bdevs_list": [ 00:18:44.253 { 00:18:44.253 "name": "pt1", 00:18:44.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.253 "is_configured": true, 00:18:44.253 "data_offset": 256, 00:18:44.253 "data_size": 7936 00:18:44.253 }, 00:18:44.253 { 00:18:44.253 "name": "pt2", 00:18:44.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.253 "is_configured": true, 00:18:44.253 "data_offset": 256, 00:18:44.253 "data_size": 7936 00:18:44.253 } 00:18:44.253 ] 00:18:44.253 } 00:18:44.253 } 00:18:44.253 }' 00:18:44.253 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:44.513 pt2' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@10 -- # set +x 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
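The `4128 32 true 0` comparisons just above (bdev_raid.sh@189-193) verify that the raid volume and every base bdev agree on block_size, md_size, md_interleave, and dif_type; block_size is 4128 here because the 32-byte metadata is interleaved into each 4096-byte data block. A minimal sketch of that tuple check, with the values hard-coded from this trace instead of extracted via `rpc_cmd bdev_get_bdevs` + `jq`:

```shell
#!/usr/bin/env bash
# Sketch of the property check at bdev_raid.sh@189-193: the raid bdev's
# (block_size, md_size, md_interleave, dif_type) tuple must match each base bdev.
# Tuples are hard-coded from this trace; the real test builds them with jq.
cmp_raid_bdev='4128 32 true 0'   # 4096B data + 32B interleaved md per block
for cmp_base_bdev in '4128 32 true 0' '4128 32 true 0'; do   # pt1, pt2
    [[ $cmp_base_bdev == "$cmp_raid_bdev" ]] || { echo mismatch; exit 1; }
done
echo match
```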
00:18:44.513 [2024-10-15 09:18:02.299291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5ddb1a80-43bb-4a2e-b7e7-58b8721200f8 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 5ddb1a80-43bb-4a2e-b7e7-58b8721200f8 ']' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.513 [2024-10-15 09:18:02.346843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.513 [2024-10-15 09:18:02.346914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.513 [2024-10-15 09:18:02.347013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.513 [2024-10-15 09:18:02.347077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.513 [2024-10-15 09:18:02.347090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.513 09:18:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.513 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:44.514 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:44.514 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:44.514 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:44.514 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.514 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:44.772 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.773 [2024-10-15 09:18:02.490661] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:44.773 [2024-10-15 09:18:02.492712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:44.773 [2024-10-15 09:18:02.492832] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:44.773 [2024-10-15 09:18:02.492937] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:44.773 [2024-10-15 09:18:02.492994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.773 [2024-10-15 09:18:02.493024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:44.773 request: 00:18:44.773 { 00:18:44.773 "name": "raid_bdev1", 00:18:44.773 "raid_level": "raid1", 00:18:44.773 "base_bdevs": [ 00:18:44.773 "malloc1", 00:18:44.773 "malloc2" 00:18:44.773 ], 00:18:44.773 "superblock": false, 00:18:44.773 "method": "bdev_raid_create", 00:18:44.773 "req_id": 1 00:18:44.773 } 00:18:44.773 Got JSON-RPC error response 00:18:44.773 response: 00:18:44.773 { 00:18:44.773 "code": -17, 00:18:44.773 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:44.773 } 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.773 [2024-10-15 09:18:02.562501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:44.773 [2024-10-15 09:18:02.562612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.773 [2024-10-15 09:18:02.562651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:44.773 [2024-10-15 09:18:02.562695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.773 [2024-10-15 09:18:02.564713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.773 [2024-10-15 09:18:02.564785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:44.773 [2024-10-15 09:18:02.564860] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:18:44.773 [2024-10-15 09:18:02.564966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:44.773 pt1 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.773 "name": "raid_bdev1", 00:18:44.773 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:44.773 "strip_size_kb": 0, 00:18:44.773 "state": "configuring", 00:18:44.773 "raid_level": "raid1", 00:18:44.773 "superblock": true, 00:18:44.773 "num_base_bdevs": 2, 00:18:44.773 "num_base_bdevs_discovered": 1, 00:18:44.773 "num_base_bdevs_operational": 2, 00:18:44.773 "base_bdevs_list": [ 00:18:44.773 { 00:18:44.773 "name": "pt1", 00:18:44.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.773 "is_configured": true, 00:18:44.773 "data_offset": 256, 00:18:44.773 "data_size": 7936 00:18:44.773 }, 00:18:44.773 { 00:18:44.773 "name": null, 00:18:44.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.773 "is_configured": false, 00:18:44.773 "data_offset": 256, 00:18:44.773 "data_size": 7936 00:18:44.773 } 00:18:44.773 ] 00:18:44.773 }' 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.773 09:18:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.343 [2024-10-15 09:18:03.049800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:45.343 [2024-10-15 09:18:03.049993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.343 [2024-10-15 09:18:03.050047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:45.343 [2024-10-15 09:18:03.050091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.343 [2024-10-15 09:18:03.050332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.343 [2024-10-15 09:18:03.050392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:45.343 [2024-10-15 09:18:03.050492] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:45.343 [2024-10-15 09:18:03.050553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:45.343 [2024-10-15 09:18:03.050716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:45.343 [2024-10-15 09:18:03.050763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:45.343 [2024-10-15 09:18:03.050888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:45.343 [2024-10-15 09:18:03.050990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:45.343 [2024-10-15 09:18:03.051024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:45.343 [2024-10-15 09:18:03.051149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.343 pt2 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.343 "name": "raid_bdev1", 00:18:45.343 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:45.343 "strip_size_kb": 0, 00:18:45.343 "state": "online", 00:18:45.343 "raid_level": "raid1", 00:18:45.343 "superblock": true, 00:18:45.343 "num_base_bdevs": 2, 00:18:45.343 "num_base_bdevs_discovered": 2, 00:18:45.343 "num_base_bdevs_operational": 2, 00:18:45.343 "base_bdevs_list": [ 00:18:45.343 { 00:18:45.343 "name": "pt1", 00:18:45.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.343 "is_configured": true, 00:18:45.343 "data_offset": 256, 00:18:45.343 "data_size": 7936 00:18:45.343 }, 00:18:45.343 { 00:18:45.343 "name": "pt2", 00:18:45.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.343 "is_configured": true, 00:18:45.343 "data_offset": 256, 00:18:45.343 "data_size": 7936 00:18:45.343 } 00:18:45.343 ] 00:18:45.343 }' 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.343 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:45.912 09:18:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.912 [2024-10-15 09:18:03.529436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.912 "name": "raid_bdev1", 00:18:45.912 "aliases": [ 00:18:45.912 "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8" 00:18:45.912 ], 00:18:45.912 "product_name": "Raid Volume", 00:18:45.912 "block_size": 4128, 00:18:45.912 "num_blocks": 7936, 00:18:45.912 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:45.912 "md_size": 32, 00:18:45.912 "md_interleave": true, 00:18:45.912 "dif_type": 0, 00:18:45.912 "assigned_rate_limits": { 00:18:45.912 "rw_ios_per_sec": 0, 00:18:45.912 "rw_mbytes_per_sec": 0, 00:18:45.912 "r_mbytes_per_sec": 0, 00:18:45.912 "w_mbytes_per_sec": 0 00:18:45.912 }, 00:18:45.912 "claimed": false, 00:18:45.912 "zoned": false, 00:18:45.912 "supported_io_types": { 00:18:45.912 "read": true, 00:18:45.912 "write": true, 00:18:45.912 "unmap": false, 00:18:45.912 "flush": false, 00:18:45.912 "reset": true, 00:18:45.912 "nvme_admin": false, 00:18:45.912 "nvme_io": false, 00:18:45.912 "nvme_io_md": false, 00:18:45.912 "write_zeroes": true, 00:18:45.912 "zcopy": false, 00:18:45.912 "get_zone_info": false, 00:18:45.912 "zone_management": 
false, 00:18:45.912 "zone_append": false, 00:18:45.912 "compare": false, 00:18:45.912 "compare_and_write": false, 00:18:45.912 "abort": false, 00:18:45.912 "seek_hole": false, 00:18:45.912 "seek_data": false, 00:18:45.912 "copy": false, 00:18:45.912 "nvme_iov_md": false 00:18:45.912 }, 00:18:45.912 "memory_domains": [ 00:18:45.912 { 00:18:45.912 "dma_device_id": "system", 00:18:45.912 "dma_device_type": 1 00:18:45.912 }, 00:18:45.912 { 00:18:45.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.912 "dma_device_type": 2 00:18:45.912 }, 00:18:45.912 { 00:18:45.912 "dma_device_id": "system", 00:18:45.912 "dma_device_type": 1 00:18:45.912 }, 00:18:45.912 { 00:18:45.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.912 "dma_device_type": 2 00:18:45.912 } 00:18:45.912 ], 00:18:45.912 "driver_specific": { 00:18:45.912 "raid": { 00:18:45.912 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:45.912 "strip_size_kb": 0, 00:18:45.912 "state": "online", 00:18:45.912 "raid_level": "raid1", 00:18:45.912 "superblock": true, 00:18:45.912 "num_base_bdevs": 2, 00:18:45.912 "num_base_bdevs_discovered": 2, 00:18:45.912 "num_base_bdevs_operational": 2, 00:18:45.912 "base_bdevs_list": [ 00:18:45.912 { 00:18:45.912 "name": "pt1", 00:18:45.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.912 "is_configured": true, 00:18:45.912 "data_offset": 256, 00:18:45.912 "data_size": 7936 00:18:45.912 }, 00:18:45.912 { 00:18:45.912 "name": "pt2", 00:18:45.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.912 "is_configured": true, 00:18:45.912 "data_offset": 256, 00:18:45.912 "data_size": 7936 00:18:45.912 } 00:18:45.912 ] 00:18:45.912 } 00:18:45.912 } 00:18:45.912 }' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:18:45.912 pt2' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.912 [2024-10-15 09:18:03.749050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 5ddb1a80-43bb-4a2e-b7e7-58b8721200f8 '!=' 5ddb1a80-43bb-4a2e-b7e7-58b8721200f8 ']' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.912 09:18:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.912 [2024-10-15 09:18:03.780821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.912 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.913 09:18:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.172 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.172 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.172 "name": "raid_bdev1", 00:18:46.172 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:46.172 "strip_size_kb": 0, 00:18:46.172 "state": "online", 00:18:46.172 "raid_level": "raid1", 00:18:46.172 "superblock": true, 00:18:46.172 "num_base_bdevs": 2, 00:18:46.172 "num_base_bdevs_discovered": 1, 00:18:46.172 "num_base_bdevs_operational": 1, 00:18:46.172 "base_bdevs_list": [ 00:18:46.172 { 00:18:46.172 "name": null, 00:18:46.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.172 "is_configured": false, 00:18:46.172 "data_offset": 0, 00:18:46.172 "data_size": 7936 00:18:46.172 }, 00:18:46.172 { 00:18:46.172 "name": "pt2", 00:18:46.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.172 "is_configured": true, 00:18:46.172 "data_offset": 256, 00:18:46.172 "data_size": 7936 00:18:46.172 } 00:18:46.172 ] 00:18:46.172 }' 00:18:46.172 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.172 09:18:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.432 [2024-10-15 09:18:04.232011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.432 [2024-10-15 09:18:04.232115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:18:46.432 [2024-10-15 09:18:04.232218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.432 [2024-10-15 09:18:04.232271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.432 [2024-10-15 09:18:04.232284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:46.432 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.433 
09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.433 [2024-10-15 09:18:04.303887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:46.433 [2024-10-15 09:18:04.304034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.433 [2024-10-15 09:18:04.304074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:46.433 [2024-10-15 09:18:04.304107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.433 [2024-10-15 09:18:04.306155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.433 [2024-10-15 09:18:04.306255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:46.433 [2024-10-15 09:18:04.306360] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:46.433 [2024-10-15 09:18:04.306455] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.433 [2024-10-15 09:18:04.306574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:46.433 [2024-10-15 09:18:04.306618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:46.433 [2024-10-15 09:18:04.306777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:46.433 [2024-10-15 09:18:04.306898] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:46.433 [2024-10-15 09:18:04.306938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:46.433 [2024-10-15 09:18:04.307058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.433 pt2 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.433 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.692 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.692 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.692 "name": "raid_bdev1", 00:18:46.692 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:46.692 "strip_size_kb": 0, 00:18:46.692 "state": "online", 00:18:46.692 "raid_level": "raid1", 00:18:46.692 "superblock": true, 00:18:46.692 "num_base_bdevs": 2, 00:18:46.692 "num_base_bdevs_discovered": 1, 00:18:46.692 "num_base_bdevs_operational": 1, 00:18:46.692 "base_bdevs_list": [ 00:18:46.692 { 00:18:46.692 "name": null, 00:18:46.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.692 "is_configured": false, 00:18:46.692 "data_offset": 256, 00:18:46.692 "data_size": 7936 00:18:46.692 }, 00:18:46.692 { 00:18:46.692 "name": "pt2", 00:18:46.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.692 "is_configured": true, 00:18:46.692 "data_offset": 256, 00:18:46.692 "data_size": 7936 00:18:46.692 } 00:18:46.692 ] 00:18:46.692 }' 00:18:46.692 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.692 09:18:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.952 [2024-10-15 09:18:04.779026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.952 [2024-10-15 09:18:04.779057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.952 [2024-10-15 09:18:04.779142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.952 [2024-10-15 09:18:04.779196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.952 [2024-10-15 09:18:04.779206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:46.952 09:18:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.952 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.952 [2024-10-15 09:18:04.826984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:46.952 [2024-10-15 09:18:04.827050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.952 [2024-10-15 09:18:04.827071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:46.952 [2024-10-15 09:18:04.827081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.952 [2024-10-15 09:18:04.829050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.952 [2024-10-15 09:18:04.829131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:46.952 [2024-10-15 09:18:04.829225] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:46.952 [2024-10-15 09:18:04.829279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:46.952 [2024-10-15 09:18:04.829390] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:46.953 [2024-10-15 09:18:04.829400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.953 [2024-10-15 09:18:04.829420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:18:46.953 [2024-10-15 09:18:04.829485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.953 [2024-10-15 09:18:04.829561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:46.953 [2024-10-15 09:18:04.829569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:46.953 [2024-10-15 09:18:04.829664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:46.953 [2024-10-15 09:18:04.829749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:46.953 [2024-10-15 09:18:04.829763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:46.953 [2024-10-15 09:18:04.829847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.953 pt1 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.953 09:18:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.953 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.212 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.212 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.212 "name": "raid_bdev1", 00:18:47.212 "uuid": "5ddb1a80-43bb-4a2e-b7e7-58b8721200f8", 00:18:47.212 "strip_size_kb": 0, 00:18:47.212 "state": "online", 00:18:47.212 "raid_level": "raid1", 00:18:47.212 "superblock": true, 00:18:47.212 "num_base_bdevs": 2, 00:18:47.212 "num_base_bdevs_discovered": 1, 00:18:47.212 "num_base_bdevs_operational": 1, 00:18:47.212 "base_bdevs_list": [ 00:18:47.212 { 00:18:47.212 "name": null, 00:18:47.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.212 "is_configured": false, 00:18:47.212 "data_offset": 256, 00:18:47.212 "data_size": 7936 00:18:47.212 }, 00:18:47.212 { 00:18:47.212 "name": "pt2", 00:18:47.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.212 "is_configured": true, 00:18:47.212 "data_offset": 256, 00:18:47.212 
"data_size": 7936 00:18:47.212 } 00:18:47.212 ] 00:18:47.212 }' 00:18:47.212 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.212 09:18:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.471 [2024-10-15 09:18:05.330387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.471 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 5ddb1a80-43bb-4a2e-b7e7-58b8721200f8 '!=' 5ddb1a80-43bb-4a2e-b7e7-58b8721200f8 ']' 00:18:47.471 09:18:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89043 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89043 ']' 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89043 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89043 00:18:47.730 killing process with pid 89043 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89043' 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 89043 00:18:47.730 [2024-10-15 09:18:05.409848] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.730 [2024-10-15 09:18:05.409970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.730 09:18:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 89043 00:18:47.730 [2024-10-15 09:18:05.410026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.730 [2024-10-15 09:18:05.410047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:47.730 [2024-10-15 09:18:05.625320] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.143 09:18:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:49.143 00:18:49.143 real 0m6.212s 00:18:49.143 user 0m9.458s 00:18:49.143 sys 0m1.112s 00:18:49.143 09:18:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.143 09:18:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.143 ************************************ 00:18:49.143 END TEST raid_superblock_test_md_interleaved 00:18:49.143 ************************************ 00:18:49.143 09:18:06 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:49.143 09:18:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:49.143 09:18:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.143 09:18:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.143 ************************************ 00:18:49.143 START TEST raid_rebuild_test_sb_md_interleaved 00:18:49.143 ************************************ 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:49.143 09:18:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:49.143 
09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89372 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89372 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89372 ']' 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.143 09:18:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.143 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:49.143 Zero copy mechanism will not be used. 00:18:49.143 [2024-10-15 09:18:06.910056] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:18:49.143 [2024-10-15 09:18:06.910185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89372 ] 00:18:49.431 [2024-10-15 09:18:07.071777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.431 [2024-10-15 09:18:07.192918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.691 [2024-10-15 09:18:07.390625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.691 [2024-10-15 09:18:07.390728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.951 BaseBdev1_malloc 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.951 09:18:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.951 [2024-10-15 09:18:07.806369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:49.951 [2024-10-15 09:18:07.806437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.951 [2024-10-15 09:18:07.806464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:49.951 [2024-10-15 09:18:07.806477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.951 [2024-10-15 09:18:07.808558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.951 [2024-10-15 09:18:07.808598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:49.951 BaseBdev1 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.951 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.210 BaseBdev2_malloc 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.210 [2024-10-15 09:18:07.861255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:50.210 [2024-10-15 09:18:07.861371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.210 [2024-10-15 09:18:07.861399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:50.210 [2024-10-15 09:18:07.861411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.210 [2024-10-15 09:18:07.863515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.210 [2024-10-15 09:18:07.863554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:50.210 BaseBdev2 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.210 spare_malloc 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.210 spare_delay 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.210 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.210 [2024-10-15 09:18:07.952574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:50.210 [2024-10-15 09:18:07.952637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.210 [2024-10-15 09:18:07.952662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:50.210 [2024-10-15 09:18:07.952672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.210 [2024-10-15 09:18:07.954670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.211 [2024-10-15 09:18:07.954718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:50.211 spare 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.211 [2024-10-15 09:18:07.964594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.211 [2024-10-15 09:18:07.966590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.211 [2024-10-15 
09:18:07.966834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:50.211 [2024-10-15 09:18:07.966853] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:50.211 [2024-10-15 09:18:07.966951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:50.211 [2024-10-15 09:18:07.967039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:50.211 [2024-10-15 09:18:07.967048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:50.211 [2024-10-15 09:18:07.967129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.211 09:18:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.211 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.211 "name": "raid_bdev1", 00:18:50.211 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:50.211 "strip_size_kb": 0, 00:18:50.211 "state": "online", 00:18:50.211 "raid_level": "raid1", 00:18:50.211 "superblock": true, 00:18:50.211 "num_base_bdevs": 2, 00:18:50.211 "num_base_bdevs_discovered": 2, 00:18:50.211 "num_base_bdevs_operational": 2, 00:18:50.211 "base_bdevs_list": [ 00:18:50.211 { 00:18:50.211 "name": "BaseBdev1", 00:18:50.211 "uuid": "00ba25af-927f-5a56-8e8c-0690b78551ef", 00:18:50.211 "is_configured": true, 00:18:50.211 "data_offset": 256, 00:18:50.211 "data_size": 7936 00:18:50.211 }, 00:18:50.211 { 00:18:50.211 "name": "BaseBdev2", 00:18:50.211 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:50.211 "is_configured": true, 00:18:50.211 "data_offset": 256, 00:18:50.211 "data_size": 7936 00:18:50.211 } 00:18:50.211 ] 00:18:50.211 }' 00:18:50.211 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.211 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.780 09:18:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.780 [2024-10-15 09:18:08.376183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:50.780 09:18:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.780 [2024-10-15 09:18:08.487741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.780 09:18:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.780 "name": "raid_bdev1", 00:18:50.780 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:50.780 "strip_size_kb": 0, 00:18:50.780 "state": "online", 00:18:50.780 "raid_level": "raid1", 00:18:50.780 "superblock": true, 00:18:50.780 "num_base_bdevs": 2, 00:18:50.780 "num_base_bdevs_discovered": 1, 00:18:50.780 "num_base_bdevs_operational": 1, 00:18:50.780 "base_bdevs_list": [ 00:18:50.780 { 00:18:50.780 "name": null, 00:18:50.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.780 "is_configured": false, 00:18:50.780 "data_offset": 0, 00:18:50.780 "data_size": 7936 00:18:50.780 }, 00:18:50.780 { 00:18:50.780 "name": "BaseBdev2", 00:18:50.780 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:50.780 "is_configured": true, 00:18:50.780 "data_offset": 256, 00:18:50.780 "data_size": 7936 00:18:50.780 } 00:18:50.780 ] 00:18:50.780 }' 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.780 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.349 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.349 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.349 09:18:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.349 [2024-10-15 09:18:08.994946] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.349 [2024-10-15 09:18:09.013667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:51.349 09:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.349 09:18:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:51.349 [2024-10-15 09:18:09.015843] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.324 "name": "raid_bdev1", 00:18:52.324 
"uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:52.324 "strip_size_kb": 0, 00:18:52.324 "state": "online", 00:18:52.324 "raid_level": "raid1", 00:18:52.324 "superblock": true, 00:18:52.324 "num_base_bdevs": 2, 00:18:52.324 "num_base_bdevs_discovered": 2, 00:18:52.324 "num_base_bdevs_operational": 2, 00:18:52.324 "process": { 00:18:52.324 "type": "rebuild", 00:18:52.324 "target": "spare", 00:18:52.324 "progress": { 00:18:52.324 "blocks": 2560, 00:18:52.324 "percent": 32 00:18:52.324 } 00:18:52.324 }, 00:18:52.324 "base_bdevs_list": [ 00:18:52.324 { 00:18:52.324 "name": "spare", 00:18:52.324 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:52.324 "is_configured": true, 00:18:52.324 "data_offset": 256, 00:18:52.324 "data_size": 7936 00:18:52.324 }, 00:18:52.324 { 00:18:52.324 "name": "BaseBdev2", 00:18:52.324 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:52.324 "is_configured": true, 00:18:52.324 "data_offset": 256, 00:18:52.324 "data_size": 7936 00:18:52.324 } 00:18:52.324 ] 00:18:52.324 }' 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.324 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.324 [2024-10-15 09:18:10.155606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:52.583 [2024-10-15 09:18:10.222003] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:52.583 [2024-10-15 09:18:10.222090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.583 [2024-10-15 09:18:10.222124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.583 [2024-10-15 09:18:10.222139] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.583 "name": "raid_bdev1", 00:18:52.583 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:52.583 "strip_size_kb": 0, 00:18:52.583 "state": "online", 00:18:52.583 "raid_level": "raid1", 00:18:52.583 "superblock": true, 00:18:52.583 "num_base_bdevs": 2, 00:18:52.583 "num_base_bdevs_discovered": 1, 00:18:52.583 "num_base_bdevs_operational": 1, 00:18:52.583 "base_bdevs_list": [ 00:18:52.583 { 00:18:52.583 "name": null, 00:18:52.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.583 "is_configured": false, 00:18:52.583 "data_offset": 0, 00:18:52.583 "data_size": 7936 00:18:52.583 }, 00:18:52.583 { 00:18:52.583 "name": "BaseBdev2", 00:18:52.583 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:52.583 "is_configured": true, 00:18:52.583 "data_offset": 256, 00:18:52.583 "data_size": 7936 00:18:52.583 } 00:18:52.583 ] 00:18:52.583 }' 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.583 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.842 "name": "raid_bdev1", 00:18:52.842 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:52.842 "strip_size_kb": 0, 00:18:52.842 "state": "online", 00:18:52.842 "raid_level": "raid1", 00:18:52.842 "superblock": true, 00:18:52.842 "num_base_bdevs": 2, 00:18:52.842 "num_base_bdevs_discovered": 1, 00:18:52.842 "num_base_bdevs_operational": 1, 00:18:52.842 "base_bdevs_list": [ 00:18:52.842 { 00:18:52.842 "name": null, 00:18:52.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.842 "is_configured": false, 00:18:52.842 "data_offset": 0, 00:18:52.842 "data_size": 7936 00:18:52.842 }, 00:18:52.842 { 00:18:52.842 "name": "BaseBdev2", 00:18:52.842 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:52.842 "is_configured": true, 00:18:52.842 "data_offset": 256, 00:18:52.842 "data_size": 7936 00:18:52.842 } 00:18:52.842 ] 00:18:52.842 }' 
00:18:52.842 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.101 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.101 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.101 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.101 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:53.101 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.101 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.101 [2024-10-15 09:18:10.833273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.101 [2024-10-15 09:18:10.851000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:53.101 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.101 09:18:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:53.101 [2024-10-15 09:18:10.853019] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.040 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.040 "name": "raid_bdev1", 00:18:54.040 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:54.040 "strip_size_kb": 0, 00:18:54.040 "state": "online", 00:18:54.040 "raid_level": "raid1", 00:18:54.040 "superblock": true, 00:18:54.040 "num_base_bdevs": 2, 00:18:54.040 "num_base_bdevs_discovered": 2, 00:18:54.040 "num_base_bdevs_operational": 2, 00:18:54.040 "process": { 00:18:54.040 "type": "rebuild", 00:18:54.040 "target": "spare", 00:18:54.040 "progress": { 00:18:54.040 "blocks": 2560, 00:18:54.040 "percent": 32 00:18:54.040 } 00:18:54.040 }, 00:18:54.040 "base_bdevs_list": [ 00:18:54.040 { 00:18:54.040 "name": "spare", 00:18:54.040 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:54.040 "is_configured": true, 00:18:54.040 "data_offset": 256, 00:18:54.040 "data_size": 7936 00:18:54.040 }, 00:18:54.040 { 00:18:54.040 "name": "BaseBdev2", 00:18:54.040 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:54.040 "is_configured": true, 00:18:54.040 "data_offset": 256, 00:18:54.040 "data_size": 7936 00:18:54.040 } 00:18:54.040 ] 00:18:54.040 }' 00:18:54.040 09:18:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.300 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.300 09:18:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:54.300 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=776 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.300 09:18:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.300 "name": "raid_bdev1", 00:18:54.300 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:54.300 "strip_size_kb": 0, 00:18:54.300 "state": "online", 00:18:54.300 "raid_level": "raid1", 00:18:54.300 "superblock": true, 00:18:54.300 "num_base_bdevs": 2, 00:18:54.300 "num_base_bdevs_discovered": 2, 00:18:54.300 "num_base_bdevs_operational": 2, 00:18:54.300 "process": { 00:18:54.300 "type": "rebuild", 00:18:54.300 "target": "spare", 00:18:54.300 "progress": { 00:18:54.300 "blocks": 2816, 00:18:54.300 "percent": 35 00:18:54.300 } 00:18:54.300 }, 00:18:54.300 "base_bdevs_list": [ 00:18:54.300 { 00:18:54.300 "name": "spare", 00:18:54.300 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:54.300 "is_configured": true, 00:18:54.300 "data_offset": 256, 00:18:54.300 "data_size": 7936 00:18:54.300 }, 00:18:54.300 { 00:18:54.300 "name": "BaseBdev2", 00:18:54.300 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:54.300 "is_configured": true, 00:18:54.300 "data_offset": 256, 00:18:54.300 "data_size": 7936 00:18:54.300 } 00:18:54.300 ] 00:18:54.300 }' 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.300 09:18:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.681 09:18:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.681 "name": "raid_bdev1", 00:18:55.681 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:55.681 "strip_size_kb": 0, 00:18:55.681 "state": "online", 00:18:55.681 "raid_level": "raid1", 00:18:55.681 "superblock": true, 00:18:55.681 "num_base_bdevs": 2, 00:18:55.681 "num_base_bdevs_discovered": 2, 00:18:55.681 "num_base_bdevs_operational": 2, 00:18:55.681 "process": { 00:18:55.681 "type": "rebuild", 00:18:55.681 "target": "spare", 00:18:55.681 "progress": { 00:18:55.681 "blocks": 5888, 00:18:55.681 "percent": 74 00:18:55.681 } 00:18:55.681 }, 00:18:55.681 "base_bdevs_list": [ 00:18:55.681 { 00:18:55.681 "name": "spare", 00:18:55.681 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:55.681 "is_configured": true, 00:18:55.681 "data_offset": 256, 00:18:55.681 "data_size": 7936 00:18:55.681 }, 00:18:55.681 { 00:18:55.681 "name": "BaseBdev2", 00:18:55.681 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:55.681 "is_configured": true, 00:18:55.681 "data_offset": 256, 00:18:55.681 "data_size": 7936 00:18:55.681 } 00:18:55.681 ] 00:18:55.681 }' 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.681 09:18:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:56.253 [2024-10-15 09:18:13.968745] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:56.253 [2024-10-15 09:18:13.968933] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:56.253 [2024-10-15 09:18:13.969094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.513 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.513 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.513 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.513 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.514 "name": "raid_bdev1", 00:18:56.514 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:56.514 "strip_size_kb": 0, 00:18:56.514 "state": "online", 00:18:56.514 "raid_level": "raid1", 00:18:56.514 "superblock": true, 00:18:56.514 "num_base_bdevs": 2, 00:18:56.514 
"num_base_bdevs_discovered": 2, 00:18:56.514 "num_base_bdevs_operational": 2, 00:18:56.514 "base_bdevs_list": [ 00:18:56.514 { 00:18:56.514 "name": "spare", 00:18:56.514 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:56.514 "is_configured": true, 00:18:56.514 "data_offset": 256, 00:18:56.514 "data_size": 7936 00:18:56.514 }, 00:18:56.514 { 00:18:56.514 "name": "BaseBdev2", 00:18:56.514 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:56.514 "is_configured": true, 00:18:56.514 "data_offset": 256, 00:18:56.514 "data_size": 7936 00:18:56.514 } 00:18:56.514 ] 00:18:56.514 }' 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.514 
09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.514 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.772 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.772 "name": "raid_bdev1", 00:18:56.772 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:56.772 "strip_size_kb": 0, 00:18:56.772 "state": "online", 00:18:56.772 "raid_level": "raid1", 00:18:56.772 "superblock": true, 00:18:56.772 "num_base_bdevs": 2, 00:18:56.772 "num_base_bdevs_discovered": 2, 00:18:56.772 "num_base_bdevs_operational": 2, 00:18:56.772 "base_bdevs_list": [ 00:18:56.772 { 00:18:56.772 "name": "spare", 00:18:56.772 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:56.772 "is_configured": true, 00:18:56.772 "data_offset": 256, 00:18:56.772 "data_size": 7936 00:18:56.772 }, 00:18:56.773 { 00:18:56.773 "name": "BaseBdev2", 00:18:56.773 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:56.773 "is_configured": true, 00:18:56.773 "data_offset": 256, 00:18:56.773 "data_size": 7936 00:18:56.773 } 00:18:56.773 ] 00:18:56.773 }' 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.773 09:18:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.773 "name": 
"raid_bdev1", 00:18:56.773 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:56.773 "strip_size_kb": 0, 00:18:56.773 "state": "online", 00:18:56.773 "raid_level": "raid1", 00:18:56.773 "superblock": true, 00:18:56.773 "num_base_bdevs": 2, 00:18:56.773 "num_base_bdevs_discovered": 2, 00:18:56.773 "num_base_bdevs_operational": 2, 00:18:56.773 "base_bdevs_list": [ 00:18:56.773 { 00:18:56.773 "name": "spare", 00:18:56.773 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:56.773 "is_configured": true, 00:18:56.773 "data_offset": 256, 00:18:56.773 "data_size": 7936 00:18:56.773 }, 00:18:56.773 { 00:18:56.773 "name": "BaseBdev2", 00:18:56.773 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:56.773 "is_configured": true, 00:18:56.773 "data_offset": 256, 00:18:56.773 "data_size": 7936 00:18:56.773 } 00:18:56.773 ] 00:18:56.773 }' 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.773 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.342 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:57.343 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.343 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 [2024-10-15 09:18:14.980820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.343 [2024-10-15 09:18:14.980858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.343 [2024-10-15 09:18:14.980958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.343 [2024-10-15 09:18:14.981033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.343 [2024-10-15 
09:18:14.981045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:57.343 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.343 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.343 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:57.343 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.343 09:18:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.343 09:18:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 [2024-10-15 09:18:15.052674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:57.343 [2024-10-15 09:18:15.052800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.343 [2024-10-15 09:18:15.052841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:57.343 [2024-10-15 09:18:15.052871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.343 [2024-10-15 09:18:15.054936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.343 [2024-10-15 09:18:15.055021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:57.343 [2024-10-15 09:18:15.055132] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:57.343 [2024-10-15 09:18:15.055231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.343 [2024-10-15 09:18:15.055391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:57.343 spare 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 [2024-10-15 09:18:15.155313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:57.343 [2024-10-15 09:18:15.155471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:57.343 [2024-10-15 09:18:15.155680] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:57.343 [2024-10-15 09:18:15.155830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:57.343 [2024-10-15 09:18:15.155840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:57.343 [2024-10-15 09:18:15.155964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.343 09:18:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.343 "name": "raid_bdev1", 00:18:57.343 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:57.343 "strip_size_kb": 0, 00:18:57.343 "state": "online", 00:18:57.343 "raid_level": "raid1", 00:18:57.343 "superblock": true, 00:18:57.343 "num_base_bdevs": 2, 00:18:57.343 "num_base_bdevs_discovered": 2, 00:18:57.343 "num_base_bdevs_operational": 2, 00:18:57.343 "base_bdevs_list": [ 00:18:57.343 { 00:18:57.343 "name": "spare", 00:18:57.343 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:57.343 "is_configured": true, 00:18:57.343 "data_offset": 256, 00:18:57.343 "data_size": 7936 00:18:57.343 }, 00:18:57.343 { 00:18:57.343 "name": "BaseBdev2", 00:18:57.343 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:57.343 "is_configured": true, 00:18:57.343 "data_offset": 256, 00:18:57.343 "data_size": 7936 00:18:57.343 } 00:18:57.343 ] 00:18:57.343 }' 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.343 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.946 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.946 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.946 09:18:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.946 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.946 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.946 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.946 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.946 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.946 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.947 "name": "raid_bdev1", 00:18:57.947 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:57.947 "strip_size_kb": 0, 00:18:57.947 "state": "online", 00:18:57.947 "raid_level": "raid1", 00:18:57.947 "superblock": true, 00:18:57.947 "num_base_bdevs": 2, 00:18:57.947 "num_base_bdevs_discovered": 2, 00:18:57.947 "num_base_bdevs_operational": 2, 00:18:57.947 "base_bdevs_list": [ 00:18:57.947 { 00:18:57.947 "name": "spare", 00:18:57.947 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:57.947 "is_configured": true, 00:18:57.947 "data_offset": 256, 00:18:57.947 "data_size": 7936 00:18:57.947 }, 00:18:57.947 { 00:18:57.947 "name": "BaseBdev2", 00:18:57.947 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:57.947 "is_configured": true, 00:18:57.947 "data_offset": 256, 00:18:57.947 "data_size": 7936 00:18:57.947 } 00:18:57.947 ] 00:18:57.947 }' 00:18:57.947 09:18:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.947 [2024-10-15 09:18:15.819493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.947 09:18:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.947 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.205 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.205 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.205 "name": "raid_bdev1", 00:18:58.205 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:58.205 "strip_size_kb": 0, 00:18:58.205 "state": "online", 00:18:58.205 
"raid_level": "raid1", 00:18:58.205 "superblock": true, 00:18:58.205 "num_base_bdevs": 2, 00:18:58.205 "num_base_bdevs_discovered": 1, 00:18:58.205 "num_base_bdevs_operational": 1, 00:18:58.205 "base_bdevs_list": [ 00:18:58.205 { 00:18:58.205 "name": null, 00:18:58.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.205 "is_configured": false, 00:18:58.205 "data_offset": 0, 00:18:58.205 "data_size": 7936 00:18:58.205 }, 00:18:58.205 { 00:18:58.205 "name": "BaseBdev2", 00:18:58.205 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:58.205 "is_configured": true, 00:18:58.205 "data_offset": 256, 00:18:58.205 "data_size": 7936 00:18:58.205 } 00:18:58.205 ] 00:18:58.205 }' 00:18:58.205 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.205 09:18:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.464 09:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:58.464 09:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.464 09:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.465 [2024-10-15 09:18:16.282773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.465 [2024-10-15 09:18:16.283080] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.465 [2024-10-15 09:18:16.283154] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:58.465 [2024-10-15 09:18:16.283227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.465 [2024-10-15 09:18:16.300197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:58.465 09:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.465 09:18:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:58.465 [2024-10-15 09:18:16.302328] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.843 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:59.843 "name": "raid_bdev1", 00:18:59.844 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:59.844 "strip_size_kb": 0, 00:18:59.844 "state": "online", 00:18:59.844 "raid_level": "raid1", 00:18:59.844 "superblock": true, 00:18:59.844 "num_base_bdevs": 2, 00:18:59.844 "num_base_bdevs_discovered": 2, 00:18:59.844 "num_base_bdevs_operational": 2, 00:18:59.844 "process": { 00:18:59.844 "type": "rebuild", 00:18:59.844 "target": "spare", 00:18:59.844 "progress": { 00:18:59.844 "blocks": 2560, 00:18:59.844 "percent": 32 00:18:59.844 } 00:18:59.844 }, 00:18:59.844 "base_bdevs_list": [ 00:18:59.844 { 00:18:59.844 "name": "spare", 00:18:59.844 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:18:59.844 "is_configured": true, 00:18:59.844 "data_offset": 256, 00:18:59.844 "data_size": 7936 00:18:59.844 }, 00:18:59.844 { 00:18:59.844 "name": "BaseBdev2", 00:18:59.844 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:59.844 "is_configured": true, 00:18:59.844 "data_offset": 256, 00:18:59.844 "data_size": 7936 00:18:59.844 } 00:18:59.844 ] 00:18:59.844 }' 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.844 [2024-10-15 09:18:17.466083] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.844 [2024-10-15 09:18:17.508501] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:59.844 [2024-10-15 09:18:17.508607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.844 [2024-10-15 09:18:17.508627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.844 [2024-10-15 09:18:17.508638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.844 09:18:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.844 "name": "raid_bdev1", 00:18:59.844 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:18:59.844 "strip_size_kb": 0, 00:18:59.844 "state": "online", 00:18:59.844 "raid_level": "raid1", 00:18:59.844 "superblock": true, 00:18:59.844 "num_base_bdevs": 2, 00:18:59.844 "num_base_bdevs_discovered": 1, 00:18:59.844 "num_base_bdevs_operational": 1, 00:18:59.844 "base_bdevs_list": [ 00:18:59.844 { 00:18:59.844 "name": null, 00:18:59.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.844 "is_configured": false, 00:18:59.844 "data_offset": 0, 00:18:59.844 "data_size": 7936 00:18:59.844 }, 00:18:59.844 { 00:18:59.844 "name": "BaseBdev2", 00:18:59.844 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:18:59.844 "is_configured": true, 00:18:59.844 "data_offset": 256, 00:18:59.844 "data_size": 7936 00:18:59.844 } 00:18:59.844 ] 00:18:59.844 }' 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.844 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.104 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:00.104 09:18:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.104 09:18:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.104 [2024-10-15 09:18:17.984078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.104 [2024-10-15 09:18:17.984224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.104 [2024-10-15 09:18:17.984270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:00.104 [2024-10-15 09:18:17.984306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.104 [2024-10-15 09:18:17.984583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.104 [2024-10-15 09:18:17.984636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.104 [2024-10-15 09:18:17.984737] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:00.104 [2024-10-15 09:18:17.984778] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:00.104 [2024-10-15 09:18:17.984823] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:00.104 [2024-10-15 09:18:17.984875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.364 [2024-10-15 09:18:18.001479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:00.364 spare 00:19:00.364 09:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.364 [2024-10-15 09:18:18.003459] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:00.364 09:18:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:01.303 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:01.304 "name": "raid_bdev1", 00:19:01.304 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:19:01.304 "strip_size_kb": 0, 00:19:01.304 "state": "online", 00:19:01.304 "raid_level": "raid1", 00:19:01.304 "superblock": true, 00:19:01.304 "num_base_bdevs": 2, 00:19:01.304 "num_base_bdevs_discovered": 2, 00:19:01.304 "num_base_bdevs_operational": 2, 00:19:01.304 "process": { 00:19:01.304 "type": "rebuild", 00:19:01.304 "target": "spare", 00:19:01.304 "progress": { 00:19:01.304 "blocks": 2560, 00:19:01.304 "percent": 32 00:19:01.304 } 00:19:01.304 }, 00:19:01.304 "base_bdevs_list": [ 00:19:01.304 { 00:19:01.304 "name": "spare", 00:19:01.304 "uuid": "dbaffe77-12f0-5936-a749-ff1ae72f1e1a", 00:19:01.304 "is_configured": true, 00:19:01.304 "data_offset": 256, 00:19:01.304 "data_size": 7936 00:19:01.304 }, 00:19:01.304 { 00:19:01.304 "name": "BaseBdev2", 00:19:01.304 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:19:01.304 "is_configured": true, 00:19:01.304 "data_offset": 256, 00:19:01.304 "data_size": 7936 00:19:01.304 } 00:19:01.304 ] 00:19:01.304 }' 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.304 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.304 [2024-10-15 
09:18:19.151186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.564 [2024-10-15 09:18:19.209600] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.564 [2024-10-15 09:18:19.209716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.564 [2024-10-15 09:18:19.209737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.564 [2024-10-15 09:18:19.209746] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.564 09:18:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.564 "name": "raid_bdev1", 00:19:01.564 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:19:01.564 "strip_size_kb": 0, 00:19:01.564 "state": "online", 00:19:01.564 "raid_level": "raid1", 00:19:01.564 "superblock": true, 00:19:01.564 "num_base_bdevs": 2, 00:19:01.564 "num_base_bdevs_discovered": 1, 00:19:01.564 "num_base_bdevs_operational": 1, 00:19:01.564 "base_bdevs_list": [ 00:19:01.564 { 00:19:01.564 "name": null, 00:19:01.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.564 "is_configured": false, 00:19:01.564 "data_offset": 0, 00:19:01.564 "data_size": 7936 00:19:01.564 }, 00:19:01.564 { 00:19:01.564 "name": "BaseBdev2", 00:19:01.564 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:19:01.564 "is_configured": true, 00:19:01.564 "data_offset": 256, 00:19:01.564 "data_size": 7936 00:19:01.564 } 00:19:01.564 ] 00:19:01.564 }' 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.564 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.132 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.132 09:18:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.132 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.132 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.132 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.132 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.132 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.132 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.132 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.133 "name": "raid_bdev1", 00:19:02.133 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:19:02.133 "strip_size_kb": 0, 00:19:02.133 "state": "online", 00:19:02.133 "raid_level": "raid1", 00:19:02.133 "superblock": true, 00:19:02.133 "num_base_bdevs": 2, 00:19:02.133 "num_base_bdevs_discovered": 1, 00:19:02.133 "num_base_bdevs_operational": 1, 00:19:02.133 "base_bdevs_list": [ 00:19:02.133 { 00:19:02.133 "name": null, 00:19:02.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.133 "is_configured": false, 00:19:02.133 "data_offset": 0, 00:19:02.133 "data_size": 7936 00:19:02.133 }, 00:19:02.133 { 00:19:02.133 "name": "BaseBdev2", 00:19:02.133 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:19:02.133 "is_configured": true, 00:19:02.133 "data_offset": 256, 
00:19:02.133 "data_size": 7936 00:19:02.133 } 00:19:02.133 ] 00:19:02.133 }' 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.133 [2024-10-15 09:18:19.917463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:02.133 [2024-10-15 09:18:19.917546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.133 [2024-10-15 09:18:19.917577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:02.133 [2024-10-15 09:18:19.917590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.133 [2024-10-15 09:18:19.917823] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.133 [2024-10-15 09:18:19.917838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:02.133 [2024-10-15 09:18:19.917904] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:02.133 [2024-10-15 09:18:19.917923] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:02.133 [2024-10-15 09:18:19.917933] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:02.133 [2024-10-15 09:18:19.917945] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:02.133 BaseBdev1 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.133 09:18:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.069 09:18:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.069 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.327 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.328 "name": "raid_bdev1", 00:19:03.328 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:19:03.328 "strip_size_kb": 0, 00:19:03.328 "state": "online", 00:19:03.328 "raid_level": "raid1", 00:19:03.328 "superblock": true, 00:19:03.328 "num_base_bdevs": 2, 00:19:03.328 "num_base_bdevs_discovered": 1, 00:19:03.328 "num_base_bdevs_operational": 1, 00:19:03.328 "base_bdevs_list": [ 00:19:03.328 { 00:19:03.328 "name": null, 00:19:03.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.328 "is_configured": false, 00:19:03.328 "data_offset": 0, 00:19:03.328 "data_size": 7936 00:19:03.328 }, 00:19:03.328 { 00:19:03.328 "name": "BaseBdev2", 00:19:03.328 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:19:03.328 "is_configured": true, 00:19:03.328 "data_offset": 256, 00:19:03.328 "data_size": 7936 00:19:03.328 } 00:19:03.328 ] 00:19:03.328 }' 00:19:03.328 09:18:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.328 09:18:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.586 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.844 "name": "raid_bdev1", 00:19:03.844 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:19:03.844 "strip_size_kb": 0, 00:19:03.844 "state": "online", 00:19:03.844 "raid_level": "raid1", 00:19:03.844 "superblock": true, 00:19:03.844 "num_base_bdevs": 2, 00:19:03.844 "num_base_bdevs_discovered": 1, 00:19:03.844 "num_base_bdevs_operational": 1, 00:19:03.844 "base_bdevs_list": [ 00:19:03.844 { 00:19:03.844 "name": 
null, 00:19:03.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.844 "is_configured": false, 00:19:03.844 "data_offset": 0, 00:19:03.844 "data_size": 7936 00:19:03.844 }, 00:19:03.844 { 00:19:03.844 "name": "BaseBdev2", 00:19:03.844 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:19:03.844 "is_configured": true, 00:19:03.844 "data_offset": 256, 00:19:03.844 "data_size": 7936 00:19:03.844 } 00:19:03.844 ] 00:19:03.844 }' 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:03.844 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.845 [2024-10-15 09:18:21.570777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.845 [2024-10-15 09:18:21.571042] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:03.845 [2024-10-15 09:18:21.571068] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:03.845 request: 00:19:03.845 { 00:19:03.845 "base_bdev": "BaseBdev1", 00:19:03.845 "raid_bdev": "raid_bdev1", 00:19:03.845 "method": "bdev_raid_add_base_bdev", 00:19:03.845 "req_id": 1 00:19:03.845 } 00:19:03.845 Got JSON-RPC error response 00:19:03.845 response: 00:19:03.845 { 00:19:03.845 "code": -22, 00:19:03.845 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:03.845 } 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:03.845 09:18:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.780 "name": "raid_bdev1", 00:19:04.780 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:19:04.780 "strip_size_kb": 0, 
00:19:04.780 "state": "online", 00:19:04.780 "raid_level": "raid1", 00:19:04.780 "superblock": true, 00:19:04.780 "num_base_bdevs": 2, 00:19:04.780 "num_base_bdevs_discovered": 1, 00:19:04.780 "num_base_bdevs_operational": 1, 00:19:04.780 "base_bdevs_list": [ 00:19:04.780 { 00:19:04.780 "name": null, 00:19:04.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.780 "is_configured": false, 00:19:04.780 "data_offset": 0, 00:19:04.780 "data_size": 7936 00:19:04.780 }, 00:19:04.780 { 00:19:04.780 "name": "BaseBdev2", 00:19:04.780 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:19:04.780 "is_configured": true, 00:19:04.780 "data_offset": 256, 00:19:04.780 "data_size": 7936 00:19:04.780 } 00:19:04.780 ] 00:19:04.780 }' 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.780 09:18:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.347 
09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.347 "name": "raid_bdev1", 00:19:05.347 "uuid": "11d75f67-4a44-4ad9-81b4-640429d30bb6", 00:19:05.347 "strip_size_kb": 0, 00:19:05.347 "state": "online", 00:19:05.347 "raid_level": "raid1", 00:19:05.347 "superblock": true, 00:19:05.347 "num_base_bdevs": 2, 00:19:05.347 "num_base_bdevs_discovered": 1, 00:19:05.347 "num_base_bdevs_operational": 1, 00:19:05.347 "base_bdevs_list": [ 00:19:05.347 { 00:19:05.347 "name": null, 00:19:05.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.347 "is_configured": false, 00:19:05.347 "data_offset": 0, 00:19:05.347 "data_size": 7936 00:19:05.347 }, 00:19:05.347 { 00:19:05.347 "name": "BaseBdev2", 00:19:05.347 "uuid": "620726fe-d4af-521f-af7d-78add86c80cf", 00:19:05.347 "is_configured": true, 00:19:05.347 "data_offset": 256, 00:19:05.347 "data_size": 7936 00:19:05.347 } 00:19:05.347 ] 00:19:05.347 }' 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89372 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89372 ']' 00:19:05.347 09:18:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89372 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89372 00:19:05.347 killing process with pid 89372 00:19:05.347 Received shutdown signal, test time was about 60.000000 seconds 00:19:05.347 00:19:05.347 Latency(us) 00:19:05.347 [2024-10-15T09:18:23.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.347 [2024-10-15T09:18:23.243Z] =================================================================================================================== 00:19:05.347 [2024-10-15T09:18:23.243Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89372' 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89372 00:19:05.347 09:18:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89372 00:19:05.347 [2024-10-15 09:18:23.218635] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.347 [2024-10-15 09:18:23.218795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.347 [2024-10-15 09:18:23.218858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:05.347 [2024-10-15 09:18:23.218891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:05.915 [2024-10-15 09:18:23.566533] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.296 ************************************ 00:19:07.296 END TEST raid_rebuild_test_sb_md_interleaved 00:19:07.296 ************************************ 00:19:07.296 09:18:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:07.296 00:19:07.296 real 0m17.977s 00:19:07.296 user 0m23.627s 00:19:07.296 sys 0m1.703s 00:19:07.296 09:18:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:07.296 09:18:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.296 09:18:24 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:07.296 09:18:24 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:07.296 09:18:24 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89372 ']' 00:19:07.296 09:18:24 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89372 00:19:07.296 09:18:24 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:07.296 00:19:07.296 real 12m38.764s 00:19:07.296 user 17m3.979s 00:19:07.296 sys 1m59.689s 00:19:07.296 09:18:24 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:07.296 09:18:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.296 ************************************ 00:19:07.296 END TEST bdev_raid 00:19:07.296 ************************************ 00:19:07.297 09:18:24 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:07.297 09:18:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:07.297 09:18:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:07.297 09:18:24 -- common/autotest_common.sh@10 -- # set +x 00:19:07.297 
************************************ 00:19:07.297 START TEST spdkcli_raid 00:19:07.297 ************************************ 00:19:07.297 09:18:24 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:07.297 * Looking for test storage... 00:19:07.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:07.297 09:18:25 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:07.297 09:18:25 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:07.297 09:18:25 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:07.297 09:18:25 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:07.297 09:18:25 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.298 09:18:25 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.298 09:18:25 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.298 09:18:25 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:07.298 09:18:25 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.298 09:18:25 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:07.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.298 --rc genhtml_branch_coverage=1 00:19:07.298 --rc genhtml_function_coverage=1 00:19:07.298 --rc genhtml_legend=1 00:19:07.298 --rc geninfo_all_blocks=1 00:19:07.298 --rc geninfo_unexecuted_blocks=1 00:19:07.298 00:19:07.298 ' 00:19:07.298 09:18:25 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:07.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.298 --rc genhtml_branch_coverage=1 00:19:07.298 --rc genhtml_function_coverage=1 00:19:07.298 --rc genhtml_legend=1 00:19:07.298 --rc geninfo_all_blocks=1 00:19:07.298 --rc geninfo_unexecuted_blocks=1 00:19:07.298 00:19:07.298 ' 00:19:07.298 
09:18:25 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:07.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.298 --rc genhtml_branch_coverage=1 00:19:07.298 --rc genhtml_function_coverage=1 00:19:07.298 --rc genhtml_legend=1 00:19:07.298 --rc geninfo_all_blocks=1 00:19:07.298 --rc geninfo_unexecuted_blocks=1 00:19:07.298 00:19:07.298 ' 00:19:07.298 09:18:25 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:07.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.298 --rc genhtml_branch_coverage=1 00:19:07.298 --rc genhtml_function_coverage=1 00:19:07.298 --rc genhtml_legend=1 00:19:07.298 --rc geninfo_all_blocks=1 00:19:07.298 --rc geninfo_unexecuted_blocks=1 00:19:07.298 00:19:07.298 ' 00:19:07.299 09:18:25 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:07.299 09:18:25 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:07.299 09:18:25 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:07.299 09:18:25 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:07.299 09:18:25 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:07.299 09:18:25 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:07.299 09:18:25 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:07.563 09:18:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.563 09:18:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90050 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:07.563 09:18:25 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90050 00:19:07.563 09:18:25 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 90050 ']' 00:19:07.563 09:18:25 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.563 09:18:25 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.563 09:18:25 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.563 09:18:25 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.563 09:18:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.563 [2024-10-15 09:18:25.319394] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:19:07.563 [2024-10-15 09:18:25.319633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90050 ] 00:19:07.821 [2024-10-15 09:18:25.474718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:07.822 [2024-10-15 09:18:25.615564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.822 [2024-10-15 09:18:25.615597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.756 09:18:26 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.756 09:18:26 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:19:08.756 09:18:26 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:08.756 09:18:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.756 09:18:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.014 09:18:26 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:09.014 09:18:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:09.014 09:18:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.014 09:18:26 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:09.014 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:09.014 ' 00:19:10.388 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:10.388 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:10.647 09:18:28 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:10.647 09:18:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.647 09:18:28 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.647 09:18:28 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:10.647 09:18:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:10.647 09:18:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.647 09:18:28 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:10.647 ' 00:19:12.023 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:12.023 09:18:29 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:12.023 09:18:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:12.023 09:18:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.023 09:18:29 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:12.023 09:18:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:12.023 09:18:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.023 09:18:29 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:12.023 09:18:29 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:12.590 09:18:30 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:12.590 09:18:30 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:12.590 09:18:30 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:12.590 09:18:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:12.590 09:18:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.590 09:18:30 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:12.590 09:18:30 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:12.590 09:18:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.590 09:18:30 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:12.590 ' 00:19:13.526 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:13.526 09:18:31 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:13.526 09:18:31 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:13.526 09:18:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.784 09:18:31 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:13.784 09:18:31 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.784 09:18:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.784 09:18:31 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:13.784 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:13.784 ' 00:19:15.161 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:15.161 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:15.161 09:18:33 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:15.161 09:18:33 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.161 09:18:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.418 09:18:33 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90050 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90050 ']' 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90050 00:19:15.418 09:18:33 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90050 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90050' 00:19:15.418 killing process with pid 90050 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 90050 00:19:15.418 09:18:33 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 90050 00:19:18.735 09:18:35 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:18.735 09:18:35 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90050 ']' 00:19:18.735 09:18:35 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90050 00:19:18.735 09:18:35 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90050 ']' 00:19:18.735 09:18:35 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90050 00:19:18.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90050) - No such process 00:19:18.735 09:18:35 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 90050 is not found' 00:19:18.735 Process with pid 90050 is not found 00:19:18.735 09:18:35 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:18.735 09:18:35 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:18.735 09:18:35 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:18.735 09:18:35 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:18.735 00:19:18.735 real 0m11.025s 00:19:18.735 user 0m22.839s 00:19:18.735 sys 
0m1.230s 00:19:18.735 09:18:35 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:18.735 09:18:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.735 ************************************ 00:19:18.735 END TEST spdkcli_raid 00:19:18.735 ************************************ 00:19:18.735 09:18:36 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:18.735 09:18:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:18.735 09:18:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:18.735 09:18:36 -- common/autotest_common.sh@10 -- # set +x 00:19:18.735 ************************************ 00:19:18.735 START TEST blockdev_raid5f 00:19:18.735 ************************************ 00:19:18.735 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:18.735 * Looking for test storage... 00:19:18.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:18.735 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:18.735 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:19:18.735 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:18.735 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.735 09:18:36 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:18.736 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.736 --rc genhtml_branch_coverage=1 00:19:18.736 --rc genhtml_function_coverage=1 00:19:18.736 --rc genhtml_legend=1 00:19:18.736 --rc geninfo_all_blocks=1 00:19:18.736 --rc geninfo_unexecuted_blocks=1 00:19:18.736 00:19:18.736 ' 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:18.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.736 --rc genhtml_branch_coverage=1 00:19:18.736 --rc genhtml_function_coverage=1 00:19:18.736 --rc genhtml_legend=1 00:19:18.736 --rc geninfo_all_blocks=1 00:19:18.736 --rc geninfo_unexecuted_blocks=1 00:19:18.736 00:19:18.736 ' 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:18.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.736 --rc genhtml_branch_coverage=1 00:19:18.736 --rc genhtml_function_coverage=1 00:19:18.736 --rc genhtml_legend=1 00:19:18.736 --rc geninfo_all_blocks=1 00:19:18.736 --rc geninfo_unexecuted_blocks=1 00:19:18.736 00:19:18.736 ' 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:18.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.736 --rc genhtml_branch_coverage=1 00:19:18.736 --rc genhtml_function_coverage=1 00:19:18.736 --rc genhtml_legend=1 00:19:18.736 --rc geninfo_all_blocks=1 00:19:18.736 --rc geninfo_unexecuted_blocks=1 00:19:18.736 00:19:18.736 ' 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90340 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:18.736 09:18:36 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90340 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90340 ']' 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.736 09:18:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:18.736 [2024-10-15 09:18:36.371292] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:19:18.736 [2024-10-15 09:18:36.371518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90340 ] 00:19:18.736 [2024-10-15 09:18:36.543416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.994 [2024-10-15 09:18:36.668804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:19:19.934 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:19.934 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:19.934 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.934 Malloc0 00:19:19.934 Malloc1 00:19:19.934 Malloc2 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.934 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.934 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:19.934 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.934 09:18:37 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.934 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.934 09:18:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2370b6b1-10e3-4850-af3a-2b8914d829db"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2370b6b1-10e3-4850-af3a-2b8914d829db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2370b6b1-10e3-4850-af3a-2b8914d829db",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "bf7dc928-3fe1-4624-8740-cc8e5b58e75f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "61d39200-70f2-495a-b142-ab9d2aa893a3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0a798390-5091-4334-93ec-9b5e764e6b27",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:20.204 09:18:37 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90340 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90340 ']' 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90340 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:19:20.204 09:18:37 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.204 
09:18:37 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90340 00:19:20.204 killing process with pid 90340 00:19:20.204 09:18:38 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:20.204 09:18:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:20.204 09:18:38 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90340' 00:19:20.204 09:18:38 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90340 00:19:20.204 09:18:38 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90340 00:19:23.504 09:18:41 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:23.504 09:18:41 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:23.504 09:18:41 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:23.504 09:18:41 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:23.504 09:18:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.504 ************************************ 00:19:23.504 START TEST bdev_hello_world 00:19:23.504 ************************************ 00:19:23.504 09:18:41 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:23.504 [2024-10-15 09:18:41.249230] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:19:23.504 [2024-10-15 09:18:41.249366] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90413 ] 00:19:23.763 [2024-10-15 09:18:41.428496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.763 [2024-10-15 09:18:41.565887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.330 [2024-10-15 09:18:42.164600] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:24.330 [2024-10-15 09:18:42.164663] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:24.330 [2024-10-15 09:18:42.164702] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:24.330 [2024-10-15 09:18:42.165363] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:24.330 [2024-10-15 09:18:42.165603] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:24.330 [2024-10-15 09:18:42.165649] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:24.330 [2024-10-15 09:18:42.165810] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:19:24.330 00:19:24.330 [2024-10-15 09:18:42.165846] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:26.233 00:19:26.233 real 0m2.656s 00:19:26.233 user 0m2.260s 00:19:26.233 sys 0m0.272s 00:19:26.233 09:18:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:26.233 09:18:43 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:26.233 ************************************ 00:19:26.233 END TEST bdev_hello_world 00:19:26.233 ************************************ 00:19:26.233 09:18:43 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:26.233 09:18:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:26.233 09:18:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:26.233 09:18:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.233 ************************************ 00:19:26.233 START TEST bdev_bounds 00:19:26.233 ************************************ 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90456 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90456' 00:19:26.233 Process bdevio pid: 90456 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90456 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90456 ']' 00:19:26.233 09:18:43 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.233 09:18:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:26.233 [2024-10-15 09:18:43.979020] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:19:26.233 [2024-10-15 09:18:43.979158] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90456 ] 00:19:26.493 [2024-10-15 09:18:44.152558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:26.493 [2024-10-15 09:18:44.292308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.493 [2024-10-15 09:18:44.293055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.493 [2024-10-15 09:18:44.293085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.061 09:18:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.061 09:18:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:27.061 09:18:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:27.360 I/O targets: 00:19:27.360 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:27.360 00:19:27.360 
00:19:27.360 CUnit - A unit testing framework for C - Version 2.1-3 00:19:27.360 http://cunit.sourceforge.net/ 00:19:27.360 00:19:27.360 00:19:27.360 Suite: bdevio tests on: raid5f 00:19:27.360 Test: blockdev write read block ...passed 00:19:27.360 Test: blockdev write zeroes read block ...passed 00:19:27.360 Test: blockdev write zeroes read no split ...passed 00:19:27.360 Test: blockdev write zeroes read split ...passed 00:19:27.644 Test: blockdev write zeroes read split partial ...passed 00:19:27.644 Test: blockdev reset ...passed 00:19:27.644 Test: blockdev write read 8 blocks ...passed 00:19:27.644 Test: blockdev write read size > 128k ...passed 00:19:27.644 Test: blockdev write read invalid size ...passed 00:19:27.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:27.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:27.644 Test: blockdev write read max offset ...passed 00:19:27.644 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:27.644 Test: blockdev writev readv 8 blocks ...passed 00:19:27.644 Test: blockdev writev readv 30 x 1block ...passed 00:19:27.644 Test: blockdev writev readv block ...passed 00:19:27.644 Test: blockdev writev readv size > 128k ...passed 00:19:27.644 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:27.644 Test: blockdev comparev and writev ...passed 00:19:27.644 Test: blockdev nvme passthru rw ...passed 00:19:27.644 Test: blockdev nvme passthru vendor specific ...passed 00:19:27.644 Test: blockdev nvme admin passthru ...passed 00:19:27.644 Test: blockdev copy ...passed 00:19:27.644 00:19:27.644 Run Summary: Type Total Ran Passed Failed Inactive 00:19:27.644 suites 1 1 n/a 0 0 00:19:27.644 tests 23 23 23 0 0 00:19:27.644 asserts 130 130 130 0 n/a 00:19:27.644 00:19:27.644 Elapsed time = 0.725 seconds 00:19:27.644 0 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90456 00:19:27.644 
09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90456 ']' 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90456 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90456 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:27.644 killing process with pid 90456 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90456' 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90456 00:19:27.644 09:18:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90456 00:19:29.544 09:18:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:29.544 00:19:29.544 real 0m3.201s 00:19:29.544 user 0m8.022s 00:19:29.544 sys 0m0.391s 00:19:29.544 09:18:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:29.544 09:18:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:29.544 ************************************ 00:19:29.544 END TEST bdev_bounds 00:19:29.544 ************************************ 00:19:29.544 09:18:47 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:29.544 09:18:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:29.544 09:18:47 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:29.545 
09:18:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.545 ************************************ 00:19:29.545 START TEST bdev_nbd 00:19:29.545 ************************************ 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90520 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90520 /var/tmp/spdk-nbd.sock 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90520 ']' 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:29.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.545 09:18:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:29.545 [2024-10-15 09:18:47.235173] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:19:29.545 [2024-10-15 09:18:47.235318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.545 [2024-10-15 09:18:47.391006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.804 [2024-10-15 09:18:47.537603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:30.371 09:18:48 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:30.629 1+0 records in 00:19:30.629 1+0 records out 00:19:30.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495442 s, 8.3 MB/s 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:30.629 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:31.250 { 00:19:31.250 "nbd_device": "/dev/nbd0", 00:19:31.250 "bdev_name": "raid5f" 00:19:31.250 } 00:19:31.250 ]' 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:31.250 { 00:19:31.250 "nbd_device": "/dev/nbd0", 00:19:31.250 "bdev_name": "raid5f" 00:19:31.250 } 00:19:31.250 ]' 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.250 09:18:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.250 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:31.817 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:32.075 /dev/nbd0 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:32.075 09:18:49 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:32.075 1+0 records in 00:19:32.075 1+0 records out 00:19:32.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347317 s, 11.8 MB/s 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.075 09:18:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:32.333 { 00:19:32.333 "nbd_device": "/dev/nbd0", 00:19:32.333 "bdev_name": "raid5f" 00:19:32.333 } 00:19:32.333 ]' 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:32.333 { 00:19:32.333 "nbd_device": "/dev/nbd0", 00:19:32.333 "bdev_name": "raid5f" 00:19:32.333 } 00:19:32.333 ]' 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:32.333 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:32.334 256+0 records in 00:19:32.334 256+0 records out 00:19:32.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137537 s, 76.2 MB/s 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:32.334 256+0 records in 00:19:32.334 256+0 records out 00:19:32.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0373066 s, 28.1 MB/s 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:32.334 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:32.592 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
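The `nbd_dd_data_verify` calls above implement a simple write-then-verify pass: fill a temp file with 256 × 4 KiB of random data, `dd` it onto the nbd device, then byte-compare the first 1 MiB back with `cmp -b -n 1M` and delete the temp file. A runnable sketch of the same pattern, using an ordinary file in place of `/dev/nbd0` so no nbd device is required (a real block-device target would also want `oflag=direct` on the write, as in the trace):

```shell
# Sketch of the nbd_dd_data_verify write/verify pair from the trace.
# Block size (4096), count (256), and "cmp -b -n 1M" match the trace;
# the target here is a plain file standing in for /dev/nbd0.
dd_data_verify_sketch() {
    local target=$1 tmp
    tmp=$(mktemp)
    # write phase: 256 x 4 KiB of random data onto the target
    dd if=/dev/urandom of="$tmp" bs=4096 count=256 2>/dev/null
    dd if="$tmp" of="$target" bs=4096 count=256 2>/dev/null
    # verify phase: byte-compare the first 1 MiB read back
    if cmp -b -n 1M "$tmp" "$target"; then
        rm -f "$tmp"
        return 0
    fi
    rm -f "$tmp"
    return 1
}
```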
00:19:32.851 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:32.851 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:32.851 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:33.112 09:18:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:33.371 malloc_lvol_verify 00:19:33.371 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:33.630 14ec0310-7994-46c4-9957-35c756b7f0f7 00:19:33.630 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:33.888 3b29e67d-e633-4d4c-9987-780a5bf041c4 00:19:33.888 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:34.147 /dev/nbd0 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:34.147 mke2fs 1.47.0 (5-Feb-2023) 00:19:34.147 Discarding device blocks: 0/4096 done 00:19:34.147 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:34.147 00:19:34.147 Allocating group tables: 0/1 done 00:19:34.147 Writing inode tables: 0/1 done 00:19:34.147 Creating journal (1024 blocks): done 00:19:34.147 Writing superblocks and filesystem accounting information: 0/1 done 00:19:34.147 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.147 09:18:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90520 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90520 ']' 00:19:34.405 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90520 00:19:34.406 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:34.406 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:34.406 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90520 00:19:34.406 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:34.406 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:34.406 killing process with pid 90520 00:19:34.406 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90520' 00:19:34.406 09:18:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90520 00:19:34.406 09:18:52 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90520 00:19:36.314 09:18:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:36.314 00:19:36.314 real 0m6.778s 00:19:36.314 user 0m9.512s 00:19:36.314 sys 0m1.389s 00:19:36.314 09:18:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:36.314 09:18:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:36.314 ************************************ 00:19:36.314 END TEST bdev_nbd 00:19:36.314 ************************************ 00:19:36.314 09:18:53 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:36.314 09:18:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:36.314 09:18:53 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:36.314 09:18:53 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:36.314 09:18:53 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:36.314 09:18:53 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36.314 09:18:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:36.314 ************************************ 00:19:36.314 START TEST bdev_fio 00:19:36.314 ************************************ 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:36.314 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:36.314 09:18:53 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:36.314 09:18:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
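The `fio_config_gen` step here assembles `bdev.fio` by echoing lines into an empty file; because the detected fio is 3.x, `serialize_overlap=1` is appended, and each bdev gets a `[job_<name>]` section with `filename=<name>` (the spdk_bdev ioengine resolves `filename` to a bdev name, not a path). A sketch of that assembly — only the `serialize_overlap=1`, `[job_raid5f]`, and `filename=raid5f` lines are confirmed by the trace; any `[global]` verify options you add beyond these are your own assumption:

```shell
# Sketch of the bdev.fio assembly reconstructed from the echoes in the
# trace. Lines written here are exactly the ones the trace shows being
# appended; the generated file is deliberately minimal.
gen_bdev_fio_sketch() {
    local out=$1 bdev=$2
    : > "$out"                              # fio_config_gen starts from an empty file
    echo 'serialize_overlap=1' >> "$out"    # appended when fio is 3.x
    echo "[job_${bdev}]" >> "$out"          # one job section per bdev
    echo "filename=${bdev}" >> "$out"       # spdk_bdev engine: filename is the bdev name
}
```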
00:19:36.314 09:18:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:36.314 09:18:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:36.314 09:18:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:36.314 09:18:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:36.315 ************************************ 00:19:36.315 START TEST bdev_fio_rw_verify 00:19:36.315 ************************************ 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:36.315 09:18:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:36.574 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:36.574 fio-3.35 00:19:36.574 Starting 1 thread 00:19:48.787 00:19:48.787 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90726: Tue Oct 15 09:19:05 2024 00:19:48.787 read: IOPS=8217, BW=32.1MiB/s (33.7MB/s)(321MiB/10001msec) 00:19:48.787 slat (usec): min=24, max=106, avg=28.95, stdev= 3.42 00:19:48.787 clat (usec): min=13, max=606, avg=192.34, stdev=69.33 00:19:48.787 lat (usec): min=41, max=649, avg=221.29, stdev=69.80 00:19:48.787 clat percentiles (usec): 00:19:48.787 | 50.000th=[ 192], 99.000th=[ 326], 99.900th=[ 379], 99.990th=[ 453], 00:19:48.787 | 99.999th=[ 603] 00:19:48.787 write: IOPS=8594, BW=33.6MiB/s (35.2MB/s)(331MiB/9860msec); 0 zone resets 00:19:48.787 slat (usec): min=10, max=324, avg=25.21, stdev= 6.59 00:19:48.787 clat (usec): min=80, max=1510, avg=447.09, stdev=68.17 00:19:48.787 lat (usec): min=102, max=1544, avg=472.30, stdev=70.01 00:19:48.787 clat percentiles (usec): 00:19:48.787 | 50.000th=[ 449], 99.000th=[ 644], 99.900th=[ 963], 99.990th=[ 1319], 00:19:48.787 | 99.999th=[ 1516] 00:19:48.787 bw ( KiB/s): min=31192, max=36464, per=99.15%, avg=34087.16, stdev=1481.62, samples=19 00:19:48.787 iops : min= 7798, max= 9116, avg=8521.79, stdev=370.41, samples=19 00:19:48.787 lat (usec) : 20=0.01%, 100=5.79%, 250=31.66%, 
500=53.80%, 750=8.52% 00:19:48.787 lat (usec) : 1000=0.17% 00:19:48.787 lat (msec) : 2=0.04% 00:19:48.787 cpu : usr=98.62%, sys=0.50%, ctx=36, majf=0, minf=7158 00:19:48.787 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.787 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.787 issued rwts: total=82184,84743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.787 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:48.787 00:19:48.787 Run status group 0 (all jobs): 00:19:48.787 READ: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=321MiB (337MB), run=10001-10001msec 00:19:48.787 WRITE: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=331MiB (347MB), run=9860-9860msec 00:19:49.722 ----------------------------------------------------- 00:19:49.722 Suppressions used: 00:19:49.722 count bytes template 00:19:49.722 1 7 /usr/src/fio/parse.c 00:19:49.722 20 1920 /usr/src/fio/iolog.c 00:19:49.722 1 8 libtcmalloc_minimal.so 00:19:49.722 1 904 libcrypto.so 00:19:49.722 ----------------------------------------------------- 00:19:49.722 00:19:49.722 00:19:49.722 real 0m13.256s 00:19:49.722 user 0m13.332s 00:19:49.722 sys 0m0.667s 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:49.722 ************************************ 00:19:49.722 END TEST bdev_fio_rw_verify 00:19:49.722 ************************************ 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2370b6b1-10e3-4850-af3a-2b8914d829db"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2370b6b1-10e3-4850-af3a-2b8914d829db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2370b6b1-10e3-4850-af3a-2b8914d829db",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "bf7dc928-3fe1-4624-8740-cc8e5b58e75f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "61d39200-70f2-495a-b142-ab9d2aa893a3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0a798390-5091-4334-93ec-9b5e764e6b27",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:49.722 /home/vagrant/spdk_repo/spdk 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:19:49.722 00:19:49.722 real 0m13.517s 
00:19:49.722 user 0m13.452s 00:19:49.722 sys 0m0.792s 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.722 09:19:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:49.722 ************************************ 00:19:49.722 END TEST bdev_fio 00:19:49.722 ************************************ 00:19:49.722 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:49.722 09:19:07 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:49.722 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:49.722 09:19:07 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:49.722 09:19:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.722 ************************************ 00:19:49.722 START TEST bdev_verify 00:19:49.722 ************************************ 00:19:49.722 09:19:07 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:49.981 [2024-10-15 09:19:07.618528] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 
00:19:49.981 [2024-10-15 09:19:07.618665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90894 ] 00:19:49.981 [2024-10-15 09:19:07.796954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:50.247 [2024-10-15 09:19:07.937908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.247 [2024-10-15 09:19:07.937921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.816 Running I/O for 5 seconds... 00:19:53.156 8229.00 IOPS, 32.14 MiB/s [2024-10-15T09:19:11.618Z] 7964.50 IOPS, 31.11 MiB/s [2024-10-15T09:19:12.994Z] 7830.67 IOPS, 30.59 MiB/s [2024-10-15T09:19:13.952Z] 7852.50 IOPS, 30.67 MiB/s [2024-10-15T09:19:13.952Z] 7930.20 IOPS, 30.98 MiB/s 00:19:56.056 Latency(us) 00:19:56.056 [2024-10-15T09:19:13.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.056 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:56.056 Verification LBA range: start 0x0 length 0x2000 00:19:56.056 raid5f : 5.02 4506.01 17.60 0.00 0.00 42868.95 429.28 46934.08 00:19:56.056 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.056 Verification LBA range: start 0x2000 length 0x2000 00:19:56.056 raid5f : 5.03 3438.27 13.43 0.00 0.00 56099.56 181.55 50597.23 00:19:56.056 [2024-10-15T09:19:13.952Z] =================================================================================================================== 00:19:56.056 [2024-10-15T09:19:13.952Z] Total : 7944.28 31.03 0.00 0.00 48602.54 181.55 50597.23 00:19:57.969 00:19:57.969 real 0m7.981s 00:19:57.969 user 0m14.631s 00:19:57.969 sys 0m0.330s 00:19:57.969 09:19:15 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:57.969 09:19:15 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:57.969 ************************************ 00:19:57.969 END TEST bdev_verify 00:19:57.969 ************************************ 00:19:57.969 09:19:15 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:57.969 09:19:15 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:57.969 09:19:15 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:57.969 09:19:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.969 ************************************ 00:19:57.970 START TEST bdev_verify_big_io 00:19:57.970 ************************************ 00:19:57.970 09:19:15 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:57.970 [2024-10-15 09:19:15.653593] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:19:57.970 [2024-10-15 09:19:15.653795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90994 ] 00:19:57.970 [2024-10-15 09:19:15.843966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:58.236 [2024-10-15 09:19:15.989258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.236 [2024-10-15 09:19:15.989286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.823 Running I/O for 5 seconds... 
00:20:01.167 506.00 IOPS, 31.62 MiB/s [2024-10-15T09:19:19.999Z] 633.00 IOPS, 39.56 MiB/s [2024-10-15T09:19:20.936Z] 654.33 IOPS, 40.90 MiB/s [2024-10-15T09:19:21.883Z] 681.25 IOPS, 42.58 MiB/s [2024-10-15T09:19:22.183Z] 660.00 IOPS, 41.25 MiB/s 00:20:04.287 Latency(us) 00:20:04.287 [2024-10-15T09:19:22.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.287 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:04.287 Verification LBA range: start 0x0 length 0x200 00:20:04.287 raid5f : 5.34 332.90 20.81 0.00 0.00 9437284.98 414.97 435914.56 00:20:04.287 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:04.287 Verification LBA range: start 0x200 length 0x200 00:20:04.287 raid5f : 5.35 332.35 20.77 0.00 0.00 9408707.22 236.10 448735.58 00:20:04.287 [2024-10-15T09:19:22.183Z] =================================================================================================================== 00:20:04.287 [2024-10-15T09:19:22.183Z] Total : 665.24 41.58 0.00 0.00 9422996.10 236.10 448735.58 00:20:06.200 00:20:06.200 real 0m8.340s 00:20:06.200 user 0m15.286s 00:20:06.200 sys 0m0.338s 00:20:06.200 09:19:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.200 09:19:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.200 ************************************ 00:20:06.200 END TEST bdev_verify_big_io 00:20:06.200 ************************************ 00:20:06.200 09:19:23 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.200 09:19:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:06.200 09:19:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.200 09:19:23 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:06.200 ************************************ 00:20:06.200 START TEST bdev_write_zeroes 00:20:06.200 ************************************ 00:20:06.200 09:19:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.200 [2024-10-15 09:19:24.014507] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:20:06.200 [2024-10-15 09:19:24.014637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91098 ] 00:20:06.459 [2024-10-15 09:19:24.168212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.459 [2024-10-15 09:19:24.355644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.398 Running I/O for 1 seconds... 
00:20:08.337 18447.00 IOPS, 72.06 MiB/s 00:20:08.337 Latency(us) 00:20:08.337 [2024-10-15T09:19:26.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.337 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.337 raid5f : 1.01 18418.21 71.95 0.00 0.00 6920.07 2203.61 18773.63 00:20:08.337 [2024-10-15T09:19:26.233Z] =================================================================================================================== 00:20:08.337 [2024-10-15T09:19:26.233Z] Total : 18418.21 71.95 0.00 0.00 6920.07 2203.61 18773.63 00:20:10.244 00:20:10.244 real 0m3.768s 00:20:10.244 user 0m3.287s 00:20:10.244 sys 0m0.344s 00:20:10.244 09:19:27 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:10.244 09:19:27 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:10.244 ************************************ 00:20:10.244 END TEST bdev_write_zeroes 00:20:10.244 ************************************ 00:20:10.244 09:19:27 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.244 09:19:27 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:10.244 09:19:27 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.244 09:19:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.244 ************************************ 00:20:10.244 START TEST bdev_json_nonenclosed 00:20:10.244 ************************************ 00:20:10.244 09:19:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.244 [2024-10-15 
09:19:27.845381] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:20:10.244 [2024-10-15 09:19:27.845596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91157 ] 00:20:10.244 [2024-10-15 09:19:28.005848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.244 [2024-10-15 09:19:28.137425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.244 [2024-10-15 09:19:28.137530] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:10.244 [2024-10-15 09:19:28.137562] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:10.244 [2024-10-15 09:19:28.137573] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:10.813 00:20:10.813 real 0m0.652s 00:20:10.813 user 0m0.432s 00:20:10.813 sys 0m0.115s 00:20:10.813 ************************************ 00:20:10.813 END TEST bdev_json_nonenclosed 00:20:10.813 ************************************ 00:20:10.813 09:19:28 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:10.813 09:19:28 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:10.813 09:19:28 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.813 09:19:28 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:10.813 09:19:28 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.813 09:19:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:10.813 
************************************ 00:20:10.813 START TEST bdev_json_nonarray 00:20:10.813 ************************************ 00:20:10.813 09:19:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.813 [2024-10-15 09:19:28.556307] Starting SPDK v25.01-pre git sha1 0ea3371f3 / DPDK 24.03.0 initialization... 00:20:10.813 [2024-10-15 09:19:28.556505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91187 ] 00:20:10.813 [2024-10-15 09:19:28.706599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.134 [2024-10-15 09:19:28.857898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.134 [2024-10-15 09:19:28.858026] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:11.134 [2024-10-15 09:19:28.858049] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:11.134 [2024-10-15 09:19:28.858071] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:11.395 ************************************ 00:20:11.395 END TEST bdev_json_nonarray 00:20:11.395 ************************************ 00:20:11.395 00:20:11.395 real 0m0.675s 00:20:11.395 user 0m0.438s 00:20:11.395 sys 0m0.131s 00:20:11.395 09:19:29 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:11.395 09:19:29 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:11.395 09:19:29 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:11.395 00:20:11.395 real 0m53.172s 00:20:11.395 user 1m12.510s 00:20:11.395 sys 0m5.102s 00:20:11.395 09:19:29 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:11.395 09:19:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:11.395 
************************************ 00:20:11.395 END TEST blockdev_raid5f 00:20:11.395 ************************************ 00:20:11.395 09:19:29 -- spdk/autotest.sh@194 -- # uname -s 00:20:11.395 09:19:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:11.395 09:19:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:11.395 09:19:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:11.395 09:19:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:11.395 09:19:29 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:20:11.395 09:19:29 -- spdk/autotest.sh@256 -- # timing_exit lib 00:20:11.395 09:19:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.395 09:19:29 -- common/autotest_common.sh@10 -- # set +x 00:20:11.654 09:19:29 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:11.654 09:19:29 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:20:11.654 09:19:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:11.654 09:19:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:11.654 09:19:29 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:20:11.654 09:19:29 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:20:11.654 09:19:29 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:20:11.654 09:19:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.654 09:19:29 -- common/autotest_common.sh@10 -- # set +x 00:20:11.654 09:19:29 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:20:11.654 09:19:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:11.654 09:19:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:11.654 09:19:29 -- common/autotest_common.sh@10 -- # set +x 00:20:13.558 INFO: APP EXITING 00:20:13.558 INFO: killing all VMs 00:20:13.558 INFO: killing vhost app 00:20:13.558 INFO: EXIT DONE 00:20:14.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:14.124 Waiting for block devices as requested 00:20:14.124 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:14.384 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:15.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.323 Cleaning 00:20:15.323 Removing: /var/run/dpdk/spdk0/config 00:20:15.323 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:15.323 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:15.323 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:15.323 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:15.323 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:15.323 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:15.323 Removing: /dev/shm/spdk_tgt_trace.pid56839 00:20:15.323 Removing: /var/run/dpdk/spdk0 00:20:15.323 Removing: /var/run/dpdk/spdk_pid56604 00:20:15.323 Removing: /var/run/dpdk/spdk_pid56839 00:20:15.323 Removing: /var/run/dpdk/spdk_pid57068 00:20:15.323 Removing: /var/run/dpdk/spdk_pid57183 00:20:15.323 Removing: /var/run/dpdk/spdk_pid57228 00:20:15.323 Removing: /var/run/dpdk/spdk_pid57367 00:20:15.323 Removing: /var/run/dpdk/spdk_pid57385 
00:20:15.323 Removing: /var/run/dpdk/spdk_pid57595 00:20:15.323 Removing: /var/run/dpdk/spdk_pid57707 00:20:15.323 Removing: /var/run/dpdk/spdk_pid57819 00:20:15.323 Removing: /var/run/dpdk/spdk_pid57947 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58055 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58094 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58131 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58207 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58302 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58755 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58830 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58904 00:20:15.323 Removing: /var/run/dpdk/spdk_pid58925 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59093 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59109 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59275 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59292 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59367 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59385 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59460 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59478 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59686 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59728 00:20:15.323 Removing: /var/run/dpdk/spdk_pid59816 00:20:15.323 Removing: /var/run/dpdk/spdk_pid61216 00:20:15.323 Removing: /var/run/dpdk/spdk_pid61427 00:20:15.323 Removing: /var/run/dpdk/spdk_pid61573 00:20:15.323 Removing: /var/run/dpdk/spdk_pid62227 00:20:15.323 Removing: /var/run/dpdk/spdk_pid62444 00:20:15.323 Removing: /var/run/dpdk/spdk_pid62590 00:20:15.323 Removing: /var/run/dpdk/spdk_pid63254 00:20:15.323 Removing: /var/run/dpdk/spdk_pid63591 00:20:15.323 Removing: /var/run/dpdk/spdk_pid63737 00:20:15.323 Removing: /var/run/dpdk/spdk_pid65133 00:20:15.323 Removing: /var/run/dpdk/spdk_pid65386 00:20:15.323 Removing: /var/run/dpdk/spdk_pid65537 00:20:15.323 Removing: /var/run/dpdk/spdk_pid66928 00:20:15.583 Removing: /var/run/dpdk/spdk_pid67192 00:20:15.583 Removing: /var/run/dpdk/spdk_pid67332 
00:20:15.583 Removing: /var/run/dpdk/spdk_pid68734 00:20:15.583 Removing: /var/run/dpdk/spdk_pid69184 00:20:15.583 Removing: /var/run/dpdk/spdk_pid69331 00:20:15.583 Removing: /var/run/dpdk/spdk_pid70817 00:20:15.583 Removing: /var/run/dpdk/spdk_pid71082 00:20:15.583 Removing: /var/run/dpdk/spdk_pid71236 00:20:15.583 Removing: /var/run/dpdk/spdk_pid72739 00:20:15.583 Removing: /var/run/dpdk/spdk_pid72998 00:20:15.583 Removing: /var/run/dpdk/spdk_pid73149 00:20:15.583 Removing: /var/run/dpdk/spdk_pid74640 00:20:15.583 Removing: /var/run/dpdk/spdk_pid75139 00:20:15.583 Removing: /var/run/dpdk/spdk_pid75284 00:20:15.583 Removing: /var/run/dpdk/spdk_pid75435 00:20:15.583 Removing: /var/run/dpdk/spdk_pid75869 00:20:15.583 Removing: /var/run/dpdk/spdk_pid76606 00:20:15.583 Removing: /var/run/dpdk/spdk_pid77003 00:20:15.583 Removing: /var/run/dpdk/spdk_pid77723 00:20:15.583 Removing: /var/run/dpdk/spdk_pid78179 00:20:15.583 Removing: /var/run/dpdk/spdk_pid78952 00:20:15.583 Removing: /var/run/dpdk/spdk_pid79372 00:20:15.583 Removing: /var/run/dpdk/spdk_pid81382 00:20:15.583 Removing: /var/run/dpdk/spdk_pid81831 00:20:15.583 Removing: /var/run/dpdk/spdk_pid82294 00:20:15.583 Removing: /var/run/dpdk/spdk_pid84419 00:20:15.583 Removing: /var/run/dpdk/spdk_pid84904 00:20:15.583 Removing: /var/run/dpdk/spdk_pid85426 00:20:15.583 Removing: /var/run/dpdk/spdk_pid86494 00:20:15.584 Removing: /var/run/dpdk/spdk_pid86823 00:20:15.584 Removing: /var/run/dpdk/spdk_pid87771 00:20:15.584 Removing: /var/run/dpdk/spdk_pid88099 00:20:15.584 Removing: /var/run/dpdk/spdk_pid89043 00:20:15.584 Removing: /var/run/dpdk/spdk_pid89372 00:20:15.584 Removing: /var/run/dpdk/spdk_pid90050 00:20:15.584 Removing: /var/run/dpdk/spdk_pid90340 00:20:15.584 Removing: /var/run/dpdk/spdk_pid90413 00:20:15.584 Removing: /var/run/dpdk/spdk_pid90456 00:20:15.584 Removing: /var/run/dpdk/spdk_pid90711 00:20:15.584 Removing: /var/run/dpdk/spdk_pid90894 00:20:15.584 Removing: /var/run/dpdk/spdk_pid90994 
00:20:15.584 Removing: /var/run/dpdk/spdk_pid91098 00:20:15.584 Removing: /var/run/dpdk/spdk_pid91157 00:20:15.584 Removing: /var/run/dpdk/spdk_pid91187 00:20:15.584 Clean 00:20:15.584 09:19:33 -- common/autotest_common.sh@1451 -- # return 0 00:20:15.584 09:19:33 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:20:15.584 09:19:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.584 09:19:33 -- common/autotest_common.sh@10 -- # set +x 00:20:15.843 09:19:33 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:20:15.843 09:19:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.843 09:19:33 -- common/autotest_common.sh@10 -- # set +x 00:20:15.843 09:19:33 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:15.843 09:19:33 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:15.843 09:19:33 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:15.843 09:19:33 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:20:15.843 09:19:33 -- spdk/autotest.sh@394 -- # hostname 00:20:15.843 09:19:33 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:16.102 geninfo: WARNING: invalid characters removed from testname! 
00:20:42.647 09:19:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:45.198 09:20:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:47.736 09:20:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:50.268 09:20:07 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:52.800 09:20:10 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:55.336 09:20:12 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:57.922 09:20:15 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:57.922 09:20:15 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:20:57.922 09:20:15 -- common/autotest_common.sh@1691 -- $ lcov --version 00:20:57.922 09:20:15 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:20:57.922 09:20:15 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:20:57.922 09:20:15 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:20:57.922 09:20:15 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:20:57.922 09:20:15 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:20:57.922 09:20:15 -- scripts/common.sh@336 -- $ IFS=.-: 00:20:57.922 09:20:15 -- scripts/common.sh@336 -- $ read -ra ver1 00:20:57.922 09:20:15 -- scripts/common.sh@337 -- $ IFS=.-: 00:20:57.922 09:20:15 -- scripts/common.sh@337 -- $ read -ra ver2 00:20:57.922 09:20:15 -- scripts/common.sh@338 -- $ local 'op=<' 00:20:57.922 09:20:15 -- scripts/common.sh@340 -- $ ver1_l=2 00:20:57.922 09:20:15 -- scripts/common.sh@341 -- $ ver2_l=1 00:20:57.922 09:20:15 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:20:57.922 09:20:15 -- scripts/common.sh@344 -- $ case "$op" in 00:20:57.922 09:20:15 -- scripts/common.sh@345 -- $ : 1 00:20:57.922 09:20:15 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:20:57.922 09:20:15 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:20:57.922 09:20:15 -- scripts/common.sh@365 -- $ decimal 1
00:20:57.922 09:20:15 -- scripts/common.sh@353 -- $ local d=1
00:20:57.922 09:20:15 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:20:57.922 09:20:15 -- scripts/common.sh@355 -- $ echo 1
00:20:57.922 09:20:15 -- scripts/common.sh@365 -- $ ver1[v]=1
00:20:57.922 09:20:15 -- scripts/common.sh@366 -- $ decimal 2
00:20:57.922 09:20:15 -- scripts/common.sh@353 -- $ local d=2
00:20:57.922 09:20:15 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:20:57.922 09:20:15 -- scripts/common.sh@355 -- $ echo 2
00:20:57.922 09:20:15 -- scripts/common.sh@366 -- $ ver2[v]=2
00:20:57.922 09:20:15 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:20:57.923 09:20:15 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:20:57.923 09:20:15 -- scripts/common.sh@368 -- $ return 0
00:20:57.923 09:20:15 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:57.923 09:20:15 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS=
00:20:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:57.923 --rc genhtml_branch_coverage=1
00:20:57.923 --rc genhtml_function_coverage=1
00:20:57.923 --rc genhtml_legend=1
00:20:57.923 --rc geninfo_all_blocks=1
00:20:57.923 --rc geninfo_unexecuted_blocks=1
00:20:57.923 
00:20:57.923 '
00:20:57.923 09:20:15 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS='
00:20:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:57.923 --rc genhtml_branch_coverage=1
00:20:57.923 --rc genhtml_function_coverage=1
00:20:57.923 --rc genhtml_legend=1
00:20:57.923 --rc geninfo_all_blocks=1
00:20:57.923 --rc geninfo_unexecuted_blocks=1
00:20:57.923 
00:20:57.923 '
00:20:57.923 09:20:15 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov
00:20:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:57.923 --rc genhtml_branch_coverage=1
00:20:57.923 --rc genhtml_function_coverage=1
00:20:57.923 --rc genhtml_legend=1
00:20:57.923 --rc geninfo_all_blocks=1
00:20:57.923 --rc geninfo_unexecuted_blocks=1
00:20:57.923 
00:20:57.923 '
00:20:57.923 09:20:15 -- common/autotest_common.sh@1705 -- $ LCOV='lcov
00:20:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:57.923 --rc genhtml_branch_coverage=1
00:20:57.923 --rc genhtml_function_coverage=1
00:20:57.923 --rc genhtml_legend=1
00:20:57.923 --rc geninfo_all_blocks=1
00:20:57.923 --rc geninfo_unexecuted_blocks=1
00:20:57.923 
00:20:57.923 '
00:20:57.923 09:20:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:57.923 09:20:15 -- scripts/common.sh@15 -- $ shopt -s extglob
00:20:57.923 09:20:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:20:57.923 09:20:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:57.923 09:20:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:57.923 09:20:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:57.923 09:20:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:57.923 09:20:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:57.923 09:20:15 -- paths/export.sh@5 -- $ export PATH
00:20:57.923 09:20:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:57.923 09:20:15 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:20:57.923 09:20:15 -- common/autobuild_common.sh@486 -- $ date +%s
00:20:57.923 09:20:15 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728984015.XXXXXX
00:20:57.923 09:20:15 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728984015.zumjSc
00:20:57.923 09:20:15 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:20:57.923 09:20:15 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:20:57.923 09:20:15 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:20:57.923 09:20:15 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:20:57.923 09:20:15 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:20:57.923 09:20:15 -- common/autobuild_common.sh@502 -- $ get_config_params
00:20:57.923 09:20:15 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:20:57.923 09:20:15 -- common/autotest_common.sh@10 -- $ set +x
00:20:57.923 09:20:15 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:20:57.923 09:20:15 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:20:57.923 09:20:15 -- pm/common@17 -- $ local monitor
00:20:57.923 09:20:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:57.923 09:20:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:57.923 09:20:15 -- pm/common@25 -- $ sleep 1
00:20:57.923 09:20:15 -- pm/common@21 -- $ date +%s
00:20:57.923 09:20:15 -- pm/common@21 -- $ date +%s
00:20:57.923 09:20:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728984015
00:20:57.923 09:20:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728984015
00:20:57.923 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728984015_collect-cpu-load.pm.log
00:20:57.923 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728984015_collect-vmstat.pm.log
00:20:58.863 09:20:16 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:20:58.863 09:20:16 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:20:58.863 09:20:16 -- spdk/autopackage.sh@14 -- $ timing_finish
00:20:58.863 09:20:16 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:58.863 09:20:16 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:58.863 09:20:16 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:58.863 09:20:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:20:58.863 09:20:16 -- pm/common@29 -- $ signal_monitor_resources TERM
00:20:58.863 09:20:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:20:58.863 09:20:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:58.863 09:20:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:20:58.863 09:20:16 -- pm/common@44 -- $ pid=92718
00:20:58.863 09:20:16 -- pm/common@50 -- $ kill -TERM 92718
00:20:58.863 09:20:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:20:58.863 09:20:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:20:58.863 09:20:16 -- pm/common@44 -- $ pid=92720
00:20:58.863 09:20:16 -- pm/common@50 -- $ kill -TERM 92720
00:20:58.863 + [[ -n 5422 ]]
00:20:58.863 + sudo kill 5422
00:20:59.130 [Pipeline] }
00:20:59.146 [Pipeline] // timeout
00:20:59.152 [Pipeline] }
00:20:59.166 [Pipeline] // stage
00:20:59.171 [Pipeline] }
00:20:59.185 [Pipeline] // catchError
00:20:59.195 [Pipeline] stage
00:20:59.197 [Pipeline] { (Stop VM)
00:20:59.210 [Pipeline] sh
00:20:59.490 + vagrant halt
00:21:02.026 ==> default: Halting domain...
00:21:10.164 [Pipeline] sh
00:21:10.445 + vagrant destroy -f
00:21:13.742 ==> default: Removing domain...
00:21:13.789 [Pipeline] sh
00:21:14.095 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output
00:21:14.105 [Pipeline] }
00:21:14.121 [Pipeline] // stage
00:21:14.126 [Pipeline] }
00:21:14.141 [Pipeline] // dir
00:21:14.147 [Pipeline] }
00:21:14.163 [Pipeline] // wrap
00:21:14.170 [Pipeline] }
00:21:14.183 [Pipeline] // catchError
00:21:14.196 [Pipeline] stage
00:21:14.198 [Pipeline] { (Epilogue)
00:21:14.212 [Pipeline] sh
00:21:14.500 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:21.118 [Pipeline] catchError
00:21:21.120 [Pipeline] {
00:21:21.133 [Pipeline] sh
00:21:21.416 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:21.416 Artifacts sizes are good
00:21:21.424 [Pipeline] }
00:21:21.439 [Pipeline] // catchError
00:21:21.450 [Pipeline] archiveArtifacts
00:21:21.458 Archiving artifacts
00:21:21.565 [Pipeline] cleanWs
00:21:21.577 [WS-CLEANUP] Deleting project workspace...
00:21:21.577 [WS-CLEANUP] Deferred wipeout is used...
00:21:21.583 [WS-CLEANUP] done
00:21:21.585 [Pipeline] }
00:21:21.600 [Pipeline] // stage
00:21:21.605 [Pipeline] }
00:21:21.622 [Pipeline] // node
00:21:21.630 [Pipeline] End of Pipeline
00:21:21.676 Finished: SUCCESS